
Kafka: Fixes needlessly low interpretation of maxRowsInMemory.#5034

Merged
gianm merged 1 commit into apache:master from gianm:appenderator-mrim
Nov 2, 2017

Conversation

@gianm
Contributor

@gianm gianm commented Nov 2, 2017

AppenderatorImpl already applies maxRowsInMemory across all sinks. So dividing by
the number of Kafka partitions is pointless and effectively makes the interpretation
of maxRowsInMemory lower than expected.

This undoes one of the two changes from #3284, which fixed the original bug twice.
In this case, that's worse than fixing it once.

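The effect of the fix can be illustrated with a minimal sketch (hypothetical names; the actual Druid classes differ). Before the change, the Kafka indexing code divided maxRowsInMemory by the number of partitions even though the appenderator already enforces that limit across all sinks combined, so tasks flushed far earlier than configured:

```java
// Hypothetical sketch of the behavior described above, not the actual Druid code.
public class MaxRowsInMemorySketch {
    // Before the fix: the configured limit was divided by the Kafka partition
    // count, on top of the appenderator already applying it across all sinks.
    static int effectiveLimitBefore(int maxRowsInMemory, int numKafkaPartitions) {
        return maxRowsInMemory / numKafkaPartitions;
    }

    // After the fix: the configured value is passed through unchanged, because
    // the appenderator alone is responsible for enforcing it.
    static int effectiveLimitAfter(int maxRowsInMemory, int numKafkaPartitions) {
        return maxRowsInMemory;
    }

    public static void main(String[] args) {
        // With 75,000 rows configured and 10 partitions, the old code would
        // start flushing after only 7,500 rows held in memory.
        System.out.println(effectiveLimitBefore(75000, 10)); // 7500
        System.out.println(effectiveLimitAfter(75000, 10));  // 75000
    }
}
```

In other words, the same setting is honored, just without the redundant second safety factor.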
@dclim
Contributor

dclim commented Nov 2, 2017

👍

@drcrallen
Contributor

Does this need to be called out in a doc at all? If I'm reading your comment correctly, it's the same setting, just applied correctly now, which would mean no doc change is needed.

@gianm
Contributor Author

gianm commented Nov 2, 2017

I don't think it needs calling out in a doc. It was applied wrongly before and now it's applied correctly.

@gianm
Contributor Author

gianm commented Nov 2, 2017

thanks @dclim @drcrallen !

@gianm gianm merged commit 5da0241 into apache:master Nov 2, 2017
gianm added a commit to gianm/druid that referenced this pull request Nov 2, 2017
…e#5034)

@gianm gianm deleted the appenderator-mrim branch November 2, 2017 19:46
jihoonson pushed a commit that referenced this pull request Nov 3, 2017
#5036)

gianm added a commit to implydata/druid-public that referenced this pull request Nov 3, 2017
…e#5034)

gianm added a commit to implydata/druid-public that referenced this pull request Nov 8, 2017
…e#5034)

seoeun25 added a commit to seoeun25/incubator-druid that referenced this pull request Jan 10, 2020
* Refactoring Appendertor Driver (apache#4292)

* Rename FiniteAppenderatorDriver to AppenderatorDriver (apache#4356)

* Add totalRowCount to appenderator

* add localhost as advertised hostname (apache#4689)

* kafkaIndexTask unannounce service in final block (apache#4736)

* warn if topic not found (apache#4834)

* Kafka: Fixes needlessly low interpretation of maxRowsInMemory. (apache#5034)

3 participants