TCP Connection Limit Unclear for Azure Web App

Mora 0 Reputation points
2025-10-31T17:53:23.5166667+00:00

On an App Service plan running a single P2v3 instance I am seeing port exhaustion issues. These are being remedied through upcoming improvements, but on the diagnostics side it is unclear which limit I am actually hitting, as the documentation and the metrics don't appear to align with what I am seeing.

The main documented limit I see is that P2v3 has an 'Outbound IP connections per instance' limit of 3,968. While port exhaustion was actively occurring I looked at several areas in the portal:

Diagnostics > TCP Connections > Outbound connections -> hovering around 2.8K (the connection summary at the bottom, which includes inbound, doesn't get above 3,000)

Azure Web App > Metrics > Connections -> 3.96K

Azure Service Plan > Metrics > Socket count (outbound/inbound/loopback) -> 2.8K

Furthermore, the diagnostics under TCP Connections showed an error: 'The app {appname} had maximum outbound connections on this instance. There were 569 outbound TCP connections and the below table shows the process and remote endpoint details at this time'. Note that this web app is integrated with a VNet and routes outbound traffic through a NAT Gateway and a public IP.

Why the inconsistencies between these?


1 answer

  1. Natheem Yousuf 340 Reputation points
    2025-11-13T05:35:45.9233333+00:00

    Hey Mora, this confusion is common.

    Those three numbers come from different measurements and scopes. The documented 'Outbound IP connections per instance' figure is ephemeral-port / SNAT guidance; Metrics > Connections shows an aggregated socket count reported by the App Service host; and Diagnostics > TCP Connections shows per-process / per-endpoint snapshots, which can surface a smaller per-process limit being hit. On top of that, the NAT Gateway has its own SNAT port limits (which depend on the number of public IPs attached), so you get further divergence: the app can hit SNAT exhaustion even though the host-level socket count looks lower.
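
    For a sense of scale on the NAT Gateway side, it can help to work out the SNAT port budget. The sketch below is only illustrative: it assumes the documented figure of 64,512 SNAT ports per public IP attached to a NAT Gateway, and the IP and instance counts are placeholders you would replace with your own values.

    ```python
    # Rough SNAT port budget for outbound traffic through a NAT Gateway.
    # Assumption: a NAT Gateway provides 64,512 SNAT ports per attached public IP
    # (per current Azure documentation); adjust the constant if that changes.
    SNAT_PORTS_PER_PUBLIC_IP = 64_512


    def snat_ports_per_instance(public_ips: int, instances_sharing_gateway: int) -> float:
        """Approximate SNAT ports available to each instance, assuming even usage."""
        total_ports = SNAT_PORTS_PER_PUBLIC_IP * public_ips
        return total_ports / max(instances_sharing_gateway, 1)


    if __name__ == "__main__":
        # Hypothetical example: one public IP, one P2v3 instance behind the gateway.
        print(snat_ports_per_instance(public_ips=1, instances_sharing_gateway=1))
    ```

    Comparing that budget with the 2.8K-4K you observed helps decide whether the NAT Gateway pool or the per-instance App Service figure is the tighter constraint in your case.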

    What to do next: check per-instance and per-process sockets from the instance (Kudu SSH and netstat -an, or the /proc sketch below), run the TCP Connections tools under Diagnose and solve problems, review NAT Gateway SNAT usage (attaching additional public IPs to a Standard NAT Gateway increases the SNAT port pool), reuse connections (HTTP keep-alive / HttpClient pooling, see the second sketch below), and scale out to reduce per-instance pressure. If the numbers still don't add up, open a support ticket with the subscription, region and timestamps so platform logs can be checked.
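
    To make the "check sockets from the instance" step concrete, here is a minimal sketch you could run from Kudu SSH, assuming a Linux App Service (on Windows, netstat -ano from the Kudu console serves the same purpose). It uses only the standard library to read /proc/net/tcp and /proc/net/tcp6 and groups connections by state and remote endpoint, roughly what the diagnostics snapshot shows:

    ```python
    #!/usr/bin/env python3
    """Count TCP connections by state and remote endpoint from /proc/net/tcp[6]."""
    import socket
    import struct
    from collections import Counter

    # TCP state codes as printed by the kernel in /proc/net/tcp.
    TCP_STATES = {
        "01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
        "04": "FIN_WAIT1", "05": "FIN_WAIT2", "06": "TIME_WAIT",
        "07": "CLOSE", "08": "CLOSE_WAIT", "09": "LAST_ACK",
        "0A": "LISTEN", "0B": "CLOSING",
    }


    def parse_ipv4(hex_addr):
        """Convert the kernel's little-endian hex 'ADDR:PORT' to dotted form."""
        ip_hex, port_hex = hex_addr.split(":")
        ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
        return f"{ip}:{int(port_hex, 16)}"


    def read_connections(path):
        """Yield (remote_address, state_code) for every socket listed in path."""
        with open(path) as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                yield fields[2], fields[3]


    def main():
        states = Counter()
        remotes = Counter()
        for path in ("/proc/net/tcp", "/proc/net/tcp6"):
            try:
                for remote, state in read_connections(path):
                    name = TCP_STATES.get(state, state)
                    states[name] += 1
                    # Only decode IPv4 remotes in this sketch; skip listeners.
                    if name != "LISTEN" and path.endswith("tcp"):
                        remotes[parse_ipv4(remote)] += 1
            except FileNotFoundError:
                pass

        print("Connections by state:")
        for name, count in states.most_common():
            print(f"  {name:12} {count}")
        print("Top remote endpoints (IPv4 only):")
        for endpoint, count in remotes.most_common(10):
            print(f"  {endpoint:22} {count}")


    if __name__ == "__main__":
        main()
    ```

    If one process or one remote endpoint dominates those counts, that is usually the connection pattern to fix first.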
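
    For the connection-reuse step, the principle is the same in any stack: keep one long-lived, pooled HTTP client per process rather than opening a new connection per request. A minimal Python sketch using requests, purely as an illustration (in .NET the equivalent is a single shared HttpClient or IHttpClientFactory, which is what HttpClient pooling above refers to):

    ```python
    import requests
    from requests.adapters import HTTPAdapter

    # One module-level session reused for the lifetime of the process: sockets are
    # kept alive and pooled, so repeated calls to the same host reuse connections
    # instead of consuming a new ephemeral port (and SNAT port) per request.
    session = requests.Session()
    adapter = HTTPAdapter(pool_connections=10, pool_maxsize=50)  # bound the pool
    session.mount("https://", adapter)
    session.mount("http://", adapter)


    def fetch(url):
        """Hypothetical helper: all outbound calls go through the shared session."""
        response = session.get(url, timeout=10)
        return response.status_code


    if __name__ == "__main__":
        # Repeated requests to the same host reuse pooled, kept-alive connections.
        for _ in range(5):
            print(fetch("https://example.com/"))
    ```

    Short-lived connections also linger in TIME_WAIT after they close, so reuse tends to give the biggest single reduction in both socket count and SNAT usage.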

    Hope this helps

