
Make asynchronous emitters have the same (configurable?) policy when event queue is full #7057

@leventov

Description


This issue narrows the idea of #7037.

There are many emitters (at least AmbariMetricsEmitter, GraphiteEmitter, StatsDEmitter, KafkaEmitter, and HttpPostEmitter; I haven't checked others) that use the same producer-consumer pattern for asynchronous emit: emit() pushes the event onto a queue (or one of several queues), and an asynchronous executor retrieves events from the queue and sends them over the network.
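To make the shared pattern concrete, here is a minimal sketch of such a producer-consumer emitter. The class and method names are illustrative, not actual Druid classes; a real emitter would serialize events and send them over the network instead of printing them.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the producer-consumer emit pattern described above.
public class QueueingEmitter
{
  private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

  public QueueingEmitter()
  {
    // Asynchronous consumer: drains the queue and "sends" each event.
    Thread consumer = new Thread(() -> {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          send(queue.take());
        }
      }
      catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    consumer.setDaemon(true);
    consumer.start();
  }

  public boolean emit(String event)
  {
    // offer() returns false when the queue is full; this is exactly the
    // point where the per-emitter policies discussed below diverge.
    return queue.offer(event);
  }

  protected void send(String event)
  {
    System.out.println("sent: " + event); // stand-in for a network send
  }
}
```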

AmbariMetricsEmitter and GraphiteEmitter use the same policy when the queue is full: they log a warning. But StatsDEmitter apparently discards new events silently when the queue is full (see the NonBlockingStatsDClient code). KafkaEmitter also discards new events, but increments a "lost events" count. HttpPostEmitter packs events into batches and drops the oldest batch when overwhelmed, logging that it does so (see HttpPostEmitter.limitBuffersToEmitSize() and limitFailedBuffersSize()).

I think all emitters should behave consistently in this regard, and the queue-full (event throttling) policy should probably be configurable.
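One possible shape for such a configurable policy is sketched below. The enum variants mirror the behaviors observed above (silent drop, drop-and-count, drop-oldest); the class and enum names are hypothetical, not a proposed Druid API, and a real implementation would also want a "log a warning" variant with rate limiting.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical configurable queue-full policy for emitters.
public class BoundedEventQueue<E>
{
  public enum FullPolicy
  {
    DROP_NEWEST_SILENTLY,  // StatsDEmitter-like
    DROP_NEWEST_AND_COUNT, // KafkaEmitter-like
    DROP_OLDEST            // HttpPostEmitter-like
  }

  private final BlockingQueue<E> queue;
  private final FullPolicy policy;
  private long lostEvents = 0;

  public BoundedEventQueue(int capacity, FullPolicy policy)
  {
    this.queue = new ArrayBlockingQueue<>(capacity);
    this.policy = policy;
  }

  public synchronized void offer(E event)
  {
    if (queue.offer(event)) {
      return;
    }
    switch (policy) {
      case DROP_NEWEST_SILENTLY:
        break; // new event is silently discarded
      case DROP_NEWEST_AND_COUNT:
        lostEvents++; // discarded, but accounted for
        break;
      case DROP_OLDEST:
        queue.poll();       // evict the oldest event
        queue.offer(event); // keep the newest
        break;
    }
  }

  public E poll()
  {
    return queue.poll();
  }

  public synchronized long getLostEvents()
  {
    return lostEvents;
  }
}
```

With this in place, every emitter's consumer loop stays the same; only the configured FullPolicy changes what happens under overload.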

Related to #2868.
