This issue narrows the idea of #7037.
There are many emitters (at least AmbariMetricsEmitter, GraphiteEmitter, StatsDEmitter, KafkaEmitter, and HttpPostEmitter; I haven't checked the others) that use the same producer-consumer pattern for asynchronous emit: emit() pushes the event to some queue (one of the queues), and an asynchronous executor retrieves events from the queue and sends them over the network.
AmbariMetricsEmitter and GraphiteEmitter use the same policy when the queue is full: they log a warning. But StatsDEmitter apparently discards new events silently when the queue is full (see the NonBlockingStatsDClient code). KafkaEmitter discards new events, but increments a "lost events" count. HttpPostEmitter packs events into batches and drops the oldest batch when overwhelmed, logging that it did so (see HttpPostEmitter.limitBuffersToEmitSize() and limitFailedBuffersSize()).
I think all emitters should behave consistently in this regard. The event throttling policy should probably be configurable.
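To make the idea concrete, here is a minimal sketch of what a shared, configurable queue-full policy could look like. All names here (QueueFullPolicy, offerWithPolicy, etc.) are hypothetical and do not exist in Druid; the three branches only mirror the behaviors described above (drop silently, drop and count, drop oldest).

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ThrottlingPolicyDemo
{
  // Hypothetical policy enum covering the behaviors the existing emitters show.
  enum QueueFullPolicy { DROP_NEW_SILENTLY, DROP_NEW_AND_COUNT, DROP_OLDEST }

  static long lostEvents = 0;

  static <E> void offerWithPolicy(BlockingQueue<E> queue, E event, QueueFullPolicy policy)
  {
    if (queue.offer(event)) {
      return; // queue had room, nothing to throttle
    }
    switch (policy) {
      case DROP_NEW_SILENTLY:
        // StatsDEmitter-like: the new event is lost without a trace
        break;
      case DROP_NEW_AND_COUNT:
        // KafkaEmitter-like: the new event is lost, but the loss is counted
        lostEvents++;
        break;
      case DROP_OLDEST:
        // HttpPostEmitter-like: evict the oldest entry to make room
        queue.poll();
        queue.offer(event);
        break;
    }
  }

  public static void main(String[] args)
  {
    BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
    offerWithPolicy(queue, "a", QueueFullPolicy.DROP_OLDEST);
    offerWithPolicy(queue, "b", QueueFullPolicy.DROP_OLDEST);
    offerWithPolicy(queue, "c", QueueFullPolicy.DROP_OLDEST); // queue full: "a" is evicted
    System.out.println(queue);
    offerWithPolicy(queue, "d", QueueFullPolicy.DROP_NEW_AND_COUNT); // queue full: "d" lost, counted
    System.out.println(lostEvents);
  }
}
```

With such a policy injected into each emitter, the per-emitter behavior differences would become a configuration choice rather than an implementation accident.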
Related to #2868.