Hi Bobby John,
The errors you're seeing are consistent with connection pool exhaustion on the Azure PostgreSQL server caused by the 20 concurrent child pipelines.
The two Npgsql error patterns tell different parts of the same story:
The first error — "Exception while reading from stream / Timeout during reading attempt" — means a connection was established but the server became unresponsive mid-query, likely because it was already under connection pressure.
The second error — "The operation has timed out" from Npgsql — is a connection pool timeout, meaning the driver could not obtain a connection within the allowed wait time because all connections were in use.
The most likely root cause is that 20 concurrent ADF pipelines are requesting more simultaneous connections than your PostgreSQL server allows.
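The arithmetic behind this is worth making explicit. A rough worst-case sketch (the per-pipeline figure is an assumption for illustration; Npgsql's default Maximum Pool Size is 100 per connection string, so the real demand can be far higher):

```python
# Back-of-envelope connection demand vs. server capacity.
concurrent_pipelines = 20
connections_per_pipeline = 3   # assumption: even a few per copy activity
max_connections = 50           # example ceiling for a small compute tier

demand = concurrent_pipelines * connections_per_pipeline
print(f"{demand} connections requested vs {max_connections} allowed")
# Once demand exceeds the ceiling, pool waiters queue up and eventually
# fail with Npgsql's "The operation has timed out".
```

Even three connections per pipeline overshoots a 50-connection ceiling; at Npgsql's defaults the theoretical demand is an order of magnitude higher.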
Steps to diagnose:
1. Check your server's max_connections value. In the Azure portal go to your PostgreSQL server, then Server parameters, and search for max_connections. The default value depends on your compute tier — a 2 vCore server may be limited to 50 or fewer connections.
2. During the failure window, check the server metrics blade for Active Connections. If it was near or at max_connections when the timeouts occurred, that confirms the diagnosis.
3. Check whether your ADF linked service connection string has an explicit connection pool size or timeout set. If not, Npgsql defaults may be allowing each pipeline to request more connections than intended.
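For step 3, a small helper can make the effective pooling settings visible. This is an illustrative sketch, not part of Npgsql itself; the fallback values are Npgsql's documented defaults (Maximum Pool Size 100, Timeout 15 seconds, Pooling on):

```python
def pooling_settings(conn_string: str) -> dict:
    """Report pooling-related keywords from an Npgsql-style connection
    string, falling back to Npgsql's documented defaults when absent."""
    defaults = {"maximum pool size": "100", "timeout": "15", "pooling": "true"}
    present = {}
    for part in conn_string.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            present[key.strip().lower()] = value.strip()
    return {key: present.get(key, default) for key, default in defaults.items()}

# No explicit pooling keywords -> every pipeline may open up to 100 connections.
print(pooling_settings("Host=myserver;Database=db;Username=u;Password=p"))
```

If the output shows the defaults, add an explicit `Maximum Pool Size` sized so that (pool size × concurrent pipelines) stays below `max_connections`.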
Solutions to consider:
Option 1 — Enable the built-in PgBouncer connection pooler if you are on Azure PostgreSQL Flexible Server. This is the most effective fix. PgBouncer sits between ADF and PostgreSQL and multiplexes many incoming connections onto a smaller number of actual server connections. It can be enabled directly in the portal under Connection pooling — no infrastructure changes needed.
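Once PgBouncer is enabled, the only change on the ADF side is the port in the linked service connection string: the built-in pooler listens on 6432 instead of 5432. A sketch (host, database, and username are placeholders):

```
# Before (direct to PostgreSQL):
Host=myserver.postgres.database.azure.com;Port=5432;Database=mydb;Username=myuser;Password=...
# After (through the built-in PgBouncer endpoint):
Host=myserver.postgres.database.azure.com;Port=6432;Database=mydb;Username=myuser;Password=...
```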
Option 2 — Reduce the degree of parallelism in ADF. If your parent pipeline is triggering 20 child pipelines simultaneously, consider adding a concurrency limit. In the ForEach activity settings you can set the batch count to a lower number such as 5 or 10.
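Conceptually, Batch count works like a bounded worker pool: all 20 items still get processed, but at most N run at once, which caps peak connection demand. A minimal sketch of that behavior (`run_child_pipeline` is a hypothetical stand-in for your ADF child pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def run_child_pipeline(table: str) -> str:
    # Stand-in for a child pipeline that copies one table.
    return f"copied {table}"

tables = [f"table_{i}" for i in range(20)]

# max_workers=5 is the analogue of setting Batch count = 5 on the ForEach.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(run_child_pipeline, tables))

print(f"{len(results)} pipelines completed, at most 5 concurrently")
```

The total work is unchanged; only the peak concurrency (and therefore the peak connection count) drops.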
Option 3 — Scale up the PostgreSQL compute tier to get a higher max_connections ceiling, as a short-term measure while you implement pooling.
Option 4 — Add a retry policy on the failing ADF activities. Under the activity settings, configure retry with a delay. This will not fix the root cause but will reduce failure rate during spikes.
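The retry-with-delay behavior ADF applies per activity can be sketched as a small wrapper. This mirrors the Retry / Retry interval settings conceptually; the function names and counts are illustrative, not ADF internals:

```python
import time

def with_retry(fn, attempts=3, delay_seconds=2.0):
    """Call fn, retrying on any exception up to `attempts` times,
    sleeping `delay_seconds` between tries; re-raise on final failure."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay_seconds)

# Usage: a stand-in for a flaky copy activity that fails twice, then succeeds.
calls = {"n": 0}
def flaky_copy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("The operation has timed out")
    return "copied"

print(with_retry(flaky_copy, attempts=3, delay_seconds=0.1))
```

As noted, this only papers over transient spikes; without pooling or reduced parallelism, retries add load to an already saturated server.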
I hope that helps.