How to record LARGE failed request body correctly in API Management?

jinxjer lee 0 Reputation points
2026-04-08T04:56:53.3166667+00:00

I am using Azure API Management (Developer/Premium tier) deployed in an internal VNet. I am trying to capture the request body only when a backend error occurs (specifically 400 errors) and save it somewhere. The request body might be large (1 MB to 10 MB), and I want to save all of this information when the API responds with a 400 code.

Current logging methods such as Application Insights do not seem able to save a large request body. I tried to use Event Hubs, but the Event Hub policy also limits the request body size to 200 KB (https://dori-uw-1.kuma-moon.com/en-us/azure/api-management/log-to-eventhub-policy).

Is there a solution for saving the large request body when a 400 error code is returned? Could I use other policies such as
https://dori-uw-1.kuma-moon.com/en-us/azure/api-management/send-one-way-request-policy
to send the failed request body to my email or somewhere else without such a small size limit? Is there any risk in doing so?

Thank you

Azure API Management

An Azure service that provides a hybrid, multi-cloud management platform for APIs.


Answer accepted by question author
  1. Siddhesh Desai 5,055 Reputation points Microsoft External Staff Moderator
    2026-04-08T06:40:47.59+00:00

    Hi @jinxjer lee

    Thank you for reaching out to Microsoft Q&A.

    Azure API Management (APIM) is designed as an API gateway and policy engine, not as a logging or payload archival system. When dealing with large request bodies (for example, 1 MB to 10 MB), APIM enforces multiple internal limits to protect gateway stability, memory usage, and performance. Because of these limits, built-in logging mechanisms such as Azure Monitor/Application Insights truncate request bodies by design, and policies like log-to-eventhub enforce a strict payload size limit (around 200 KB).

    Even other outbound policies such as send-one-way-request still process the payload inside the APIM runtime and are not guaranteed to handle large bodies safely or reliably. As a result, attempting to persist or forward large failed request bodies (for example, on HTTP 400 responses) directly from APIM is not supported and can lead to throttling, latency issues, gateway failures, or even data loss. This behavior is expected and aligns with APIM’s intended role in the architecture.

    Refer to the points below to resolve this issue, or use them as workarounds:

    Log large request bodies at the backend, not in APIM: Move the responsibility of capturing and storing the full request body to your backend service (App Service, Functions, Container Apps, etc.). When a 400 error occurs, the backend can safely store the complete request payload in Blob Storage, a database, or another storage service without APIM size constraints. APIM should only forward the request and handle routing, security, and transformation.

    Use correlation IDs instead of full payload logging in APIM: Generate or pass a correlation ID (for example, x-correlation-id) in APIM and forward it to the backend. On failure, log this correlation ID along with metadata such as request size, headers, or hashes in APIM, while the backend stores the full request body. This allows you to correlate logs without storing large payloads in APIM.
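    As an illustrative sketch (the x-correlation-id header name and the variable name are assumptions, not names fixed by APIM), an inbound policy section could attach the ID like this:

    ```xml
    <inbound>
        <!-- Reuse the caller's correlation ID if one was sent; otherwise fall back to the gateway's request ID -->
        <set-variable name="correlationId" value="@(context.Request.Headers.GetValueOrDefault("x-correlation-id", context.RequestId.ToString()))" />
        <!-- Forward the ID to the backend so it can tag the stored payload with the same value -->
        <set-header name="x-correlation-id" exists-action="override">
            <value>@((string)context.Variables["correlationId"])</value>
        </set-header>
    </inbound>
    ```

    The backend then stores the full body keyed by this ID, while APIM logs only the ID and lightweight metadata.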

    Pre-upload large payloads and pass references through APIM: For very large request bodies, a recommended pattern is to have clients upload the payload directly to Blob Storage and then call the API through APIM with only a reference (Blob URL or ID). This avoids large payload handling in APIM entirely and makes error investigation simpler and more reliable.

    If APIM-side logging is required, only log partial data: As a compromise, you can log only a small truncated portion of the request body (for example, the first 32–64 KB), the total content length, and a hash of the body when a 400 error occurs. This provides diagnostic value without violating APIM size and memory limits.
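    A minimal sketch of this compromise, assuming an Event Hubs logger already registered under the (hypothetical) ID error-logger:

    ```xml
    <outbound>
        <choose>
            <when condition="@(context.Response.StatusCode == 400)">
                <log-to-eventhub logger-id="error-logger">@{
                    // Buffer the request body; preserveContent keeps it readable by later policies
                    var body = context.Request.Body?.As<string>(preserveContent: true) ?? string.Empty;
                    // Keep only the first 32 KB to stay well under the 200 KB policy limit
                    var prefix = body.Length > 32768 ? body.Substring(0, 32768) : body;
                    return new JObject(
                        new JProperty("requestId", context.RequestId),
                        new JProperty("statusCode", context.Response.StatusCode),
                        new JProperty("contentLength", body.Length),
                        new JProperty("bodyPrefix", prefix)
                    ).ToString();
                }</log-to-eventhub>
            </when>
        </choose>
    </outbound>
    ```

    The full body never leaves the gateway; only the truncated prefix and metadata reach Event Hubs, so the 200 KB cap is never hit.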

    Avoid sending large request bodies via email or external webhooks from APIM: Using send-one-way-request to send full request bodies to email, Logic Apps, or external services is not recommended for large payloads. This approach increases the risk of gateway instability, performance degradation, and potential security or compliance issues, especially in production environments.


1 additional answer

  1. Q&A Assist
    2026-04-08T04:57:12.45+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    Azure API Management policies that send data out of the gateway (such as log-to-eventhub, send-request, or send-one-way-request) are constrained by the limits of the target service and, for log-to-eventhub, by an explicit policy limit.

    From the available information:

    • log-to-eventhub has a hard limit of 200 KB per message from the policy. Messages larger than 200 KB are automatically truncated before being sent to Event Hubs.
    • The sample logging guidance for Event Hubs explicitly recommends truncating HTTP payloads because of Event Hubs message size limits and cost/throughput considerations.

    Because of this 200‑KB cap in the log-to-eventhub policy, API Management cannot reliably push 1–10 MB request bodies into Event Hubs without truncation. The same practical constraints apply to other HTTP-based destinations (including email or custom endpoints) because:

    • The policy engine is designed for control and integration, not for bulk data export.
    • Large bodies significantly increase latency, resource usage, and risk of timeouts or failures.

    The documented workaround for payloads larger than the policy limit is to offload them to Azure Storage instead of trying to send the entire body through the logging policy path. The guidance is:

    • For messages larger than 200 KB, use Azure Storage with API Management as a workaround to bypass the 200‑KB limit.

    In practice, this means:

    1. Use API Management policies to detect error conditions (for example, in outbound or on-error when the backend returns 400).
    2. For failed requests where the body is large, send or stream the body to Azure Storage (for example, a blob) instead of trying to log the full body directly to Event Hubs or Application Insights.
    3. Log only metadata and a reference (such as a blob name or URL) to Event Hubs or Application Insights for correlation and analysis.

    This pattern avoids the Event Hubs policy size limit and keeps logging traffic small while still preserving full failed request bodies in Storage.
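    The offload step can be sketched inside APIM itself. In the fragment below, the storage account name (contosologs), container name (failed-requests), and the managed-identity setup are all assumptions; note that the body is still buffered in the gateway, so the earlier caveats about very large payloads still apply:

    ```xml
    <outbound>
        <choose>
            <when condition="@(context.Response.StatusCode == 400)">
                <!-- Fire-and-forget PUT of the full request body to Blob Storage -->
                <send-one-way-request mode="new">
                    <set-url>@("https://contosologs.blob.core.windows.net/failed-requests/" + context.RequestId + ".json")</set-url>
                    <set-method>PUT</set-method>
                    <set-header name="x-ms-blob-type" exists-action="override">
                        <value>BlockBlob</value>
                    </set-header>
                    <!-- OAuth access to Blob Storage requires a recent x-ms-version -->
                    <set-header name="x-ms-version" exists-action="override">
                        <value>2021-08-06</value>
                    </set-header>
                    <!-- Assumes the APIM instance's managed identity has a Storage Blob Data Contributor role -->
                    <authentication-managed-identity resource="https://storage.azure.com/" />
                    <set-body>@(context.Request.Body.As<string>(preserveContent: true))</set-body>
                </send-one-way-request>
            </when>
        </choose>
    </outbound>
    ```

    A separate log-to-eventhub or Application Insights entry then needs to carry only context.RequestId (the blob name) for correlation.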

    Using send-one-way-request or similar policies to push multi‑MB bodies directly to email or arbitrary endpoints is not recommended because:

    • The policy documentation and examples are oriented around control messages, not multi‑MB payload transfer.
    • Large payloads increase latency for all callers and may cause failures or retries.
    • Email systems and many HTTP endpoints are not designed as reliable large‑payload logging sinks.

    The supported and recommended approach for large payloads is to store them in Azure Storage and log references via Event Hubs or other telemetry.

