Kafka: Fixes needlessly low interpretation of maxRowsInMemory. #5034
Merged
gianm merged 1 commit into apache:master on Nov 2, 2017
Conversation
AppenderatorImpl already applies maxRowsInMemory across all sinks, so dividing by the number of Kafka partitions is pointless and effectively makes the interpretation of maxRowsInMemory lower than expected. This undoes one of the two changes from apache#3284, which fixed the original bug twice. In this case, that's worse than fixing it once.
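To illustrate the effect, here is a rough sketch of the arithmetic described above. The class name and the numbers are illustrative assumptions, not the actual Druid code or its defaults.

```java
// Illustrative sketch only: names and values are assumptions, not the real Druid diff.
public class MaxRowsInMemoryExample
{
  public static void main(String[] args)
  {
    final int maxRowsInMemory = 75_000; // user-configured tuning value (example)
    final int kafkaPartitions = 4;      // Kafka partitions assigned to the task (example)

    // Old behavior: the limit was divided by the partition count before reaching the
    // appenderator, so rows were persisted after far fewer in-memory rows than configured.
    final int oldEffectiveLimit = maxRowsInMemory / kafkaPartitions; // 18,750

    // New behavior: the configured value is passed through unchanged, since
    // AppenderatorImpl already enforces it across all sinks combined.
    final int newEffectiveLimit = maxRowsInMemory; // 75,000

    System.out.println("old effective limit: " + oldEffectiveLimit);
    System.out.println("new effective limit: " + newEffectiveLimit);
  }
}
```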
Contributor
👍
Contributor
Does this need to be called out in a doc at all? If I'm reading your comment correctly, it is the same setting just applied correctly now, which would mean no documentation change is needed.
drcrallen
approved these changes
Nov 2, 2017
Contributor
Author
I don't think it needs calling out in a doc. It was applied wrongly before and now it's applied correctly.
Contributor
Author
Thanks @dclim @drcrallen!
gianm added a commit to gianm/druid that referenced this pull request on Nov 2, 2017
Kafka: Fixes needlessly low interpretation of maxRowsInMemory. (apache#5034)
jihoonson pushed a commit that referenced this pull request on Nov 3, 2017
Kafka: Fixes needlessly low interpretation of maxRowsInMemory. (#5036)
gianm added a commit to implydata/druid-public that referenced this pull request on Nov 3, 2017
Kafka: Fixes needlessly low interpretation of maxRowsInMemory. (apache#5034)
gianm added a commit to implydata/druid-public that referenced this pull request on Nov 8, 2017
Kafka: Fixes needlessly low interpretation of maxRowsInMemory. (apache#5034)
seoeun25 added a commit to seoeun25/incubator-druid that referenced this pull request on Jan 10, 2020
* Refactoring Appenderator Driver (apache#4292)
* Rename FiniteAppenderatorDriver to AppenderatorDriver (apache#4356)
* Add totalRowCount to appenderator
* add localhost as advertised hostname (apache#4689)
* kafkaIndexTask unannounce service in final block (apache#4736)
* warn if topic not found (apache#4834)
* Kafka: Fixes needlessly low interpretation of maxRowsInMemory. (apache#5034)
AppenderatorImpl already applies maxRowsInMemory across all sinks. So dividing by
the number of Kafka partitions is pointless and effectively makes the interpretation
of maxRowsInMemory lower than expected.
This undoes one of the two changes from #3284, which fixed the original bug twice.
In this case, that's worse than fixing it once.