Azure Event Hubs with Kafka, read_committed consumers stuck

MALINGE Benjamin 0 Reputation points
2026-04-09T12:09:50.46+00:00

Context:

We are using transactional Kafka producers on Azure Event Hubs (Premium tier) with Spring Kafka 3.3.10, and consumers configured with isolation.level=read_committed. When a producer crashes or does not explicitly complete its transaction, our consumers appear to be stuck indefinitely at a specific offset.
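For reference, a minimal sketch of the consumer configuration we are describing, using plain client Properties rather than Spring Kafka's property binding (the endpoint and group id are placeholders, not our real values):

```java
import java.util.Properties;

public class ReadCommittedConsumerConfig {
    public static Properties build() {
        Properties p = new Properties();
        // Placeholder Event Hubs Kafka endpoint -- replace with your namespace.
        p.put("bootstrap.servers", "mynamespace.servicebus.windows.net:9093");
        p.put("security.protocol", "SASL_SSL");
        p.put("sasl.mechanism", "PLAIN");
        p.put("group.id", "example-group");
        // Only records up to the lastStableOffset are delivered in this mode;
        // an open (uncommitted) transaction blocks further progress.
        p.put("isolation.level", "read_committed");
        return p;
    }
}
```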

Observations:

  • Consumer logs show:

offset=534
lastStableOffset=535
records=0
highWatermark=704

  • Messages exist in the log up to a higher offset, but the consumer receives no records.

Analysis (expected Apache Kafka behavior):

  • In native Kafka, if a transactional producer crashes before commit or abort:
    • The transaction coordinator waits for transaction.timeout.ms
    • Then automatically aborts the transaction
    • The lastStableOffset advances
    • read_committed consumers can resume processing

Transactions rely on commit/abort markers to control message visibility.
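To make the timeout dependency concrete, this is a sketch of the producer settings involved (the endpoint, transactional.id, and timeout value are illustrative, not our production configuration):

```java
import java.util.Properties;

public class TransactionalProducerConfig {
    public static Properties build() {
        Properties p = new Properties();
        // Placeholder endpoint -- replace with your namespace.
        p.put("bootstrap.servers", "mynamespace.servicebus.windows.net:9093");
        p.put("enable.idempotence", "true");              // required for transactions
        p.put("transactional.id", "example-tx-producer"); // stable id per logical producer
        // If the producer crashes between beginTransaction() and
        // commitTransaction()/abortTransaction(), the coordinator is expected
        // to abort the transaction after this timeout, writing an abort marker
        // and advancing the lastStableOffset for read_committed consumers.
        p.put("transaction.timeout.ms", "60000");
        return p;
    }
}
```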

Observed behavior on Event Hubs:

  • According to the documentation:

Warning: If the transaction is not committed or aborted before the max.transaction.timeout.ms, the transaction is aborted by Event Hubs automatically. The default max.transaction.timeout.ms is 15 minutes, but the producer can override it via transaction.timeout.ms.

  • In our tests, the automatic unblocking does not seem to occur: the consumer remains stuck on the same offset even after a significant amount of time, and no transaction marker (abort or commit) is written to advance the lastStableOffset.

Conclusion:

It appears that Azure Event Hubs (Kafka mode, Premium tier) does not properly handle automatic transaction termination when max.transaction.timeout.ms is exceeded. Although the transaction should be automatically aborted, this does not seem to happen in practice, leaving read_committed consumers stuck on the affected offset.

Thank you very much.

Azure Event Hubs

1 answer

  1. Q&A Assist
    2026-04-09T12:10:09.0633333+00:00
    AI-generated content: this answer was created with AI from Q&A Assist.

    Azure Event Hubs’ Kafka interface currently does not support Kafka transactions or exactly-once semantics, even though there is a separate preview feature for “Kafka Transactions” documented for certain tiers.

    In the Event Hubs Kafka configuration guidance, transactions and EOS are explicitly listed as not supported:

    The following Kafka features and configurations aren't available with Event Hubs:

    Feature                         Reason
    Transactions                    Not currently supported
    Exactly-once semantics (EOS)    Not currently supported

    Because transactions are not supported at the Kafka protocol level in this configuration, behavior that depends on Kafka’s transaction coordinator semantics—such as automatic abort after transaction.timeout.ms / max.transaction.timeout.ms and advancement of lastStableOffset for read_committed consumers—cannot be relied on to behave like native Apache Kafka. This matches the symptoms described: read_committed consumers can appear “stuck” because the expected commit/abort markers and last-stable-offset progression are not guaranteed.

    Where the transactional API is supported (Kafka Transactions public preview on Premium/Dedicated), the documented model is the standard Kafka pattern: transactional producer with enable.idempotence=true and transactional.id, consumer with isolation.level=read_committed, and automatic abort if the transaction is not completed before max.transaction.timeout.ms (default 15 minutes, overridable via transaction.timeout.ms). However, the separate configuration guidance for Kafka on Event Hubs still states that transactions and EOS are not supported, which means this functionality is limited and not generally available as a fully compatible Kafka feature.

    Given the current documented limitations, the observed behavior—read_committed consumers not unblocking as in native Kafka—is consistent with the fact that Kafka transactions are not fully supported. For workloads that require reliable progress without depending on transactional markers, consider:

    • Avoiding isolation.level=read_committed and using read_uncommitted semantics, or
    • Designing idempotent processing and exactly-once behavior at the application level instead of relying on Kafka transactions.
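    As an illustration of the second option, a minimal application-level deduplication sketch (the string record id and in-memory store are assumptions; a production version would derive the id from the message and persist the seen set):

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentProcessor {
    private final Set<String> seen = new HashSet<>();

    /** Returns true if the record was processed, false if it was a duplicate. */
    public boolean process(String recordId, Runnable action) {
        // Set.add() returns false when the id was already present,
        // so redelivered records are skipped instead of reapplied.
        if (!seen.add(recordId)) {
            return false;
        }
        action.run();
        return true;
    }
}
```

    This gives effectively-once processing on redelivery without depending on broker-side transaction markers.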

    For scenarios that must use the transactional preview feature, this behavior should be treated as a potential service issue and raised with Azure Support, referencing the Kafka Transactions preview documentation.


