KAFKA-14222; KRaft's memory pool should always allocate a buffer #12625
jsancio merged 3 commits into apache:trunk from
Conversation
Because the snapshot writer sets a linger ms of Integer.MAX_VALUE, it is possible for the memory pool to run out of memory if the snapshot is greater than 5 * 8MB. This change allows the BatchMemoryPool to always allocate a buffer when requested. On release, the memory pool frees the extra allocated buffer if the number of pooled buffers is already at the configured maximum number of batches.
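The behavior described above can be sketched as follows. This is a simplified illustration, not the actual Kafka source: the class and method names (`SimpleBatchPool`, `tryAllocate`, `release`, `pooledCount`) are assumptions for the example, and the real `BatchMemoryPool` has additional bookkeeping.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the patched behavior: allocation never fails, and release()
// frees the buffer instead of pooling it once the pool already retains
// the maximum number of batches.
public class SimpleBatchPool {
    private final int batchSize;
    private final int maxRetainedBatches;
    private final Deque<ByteBuffer> free = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();

    public SimpleBatchPool(int maxRetainedBatches, int batchSize) {
        this.maxRetainedBatches = maxRetainedBatches;
        this.batchSize = batchSize;
    }

    public ByteBuffer tryAllocate(int size) {
        if (size > batchSize) {
            throw new IllegalArgumentException("Cannot allocate more than " + batchSize + " bytes");
        }
        lock.lock();
        try {
            ByteBuffer buffer = free.poll();
            // Always allocate: fall back to a fresh buffer when the pool is empty.
            return buffer != null ? buffer : ByteBuffer.allocate(batchSize);
        } finally {
            lock.unlock();
        }
    }

    public void release(ByteBuffer buffer) {
        lock.lock();
        try {
            buffer.clear();
            // Retain at most maxRetainedBatches buffers; extras are dropped
            // and left to the garbage collector.
            if (free.size() < maxRetainedBatches) {
                free.offer(buffer);
            }
        } finally {
            lock.unlock();
        }
    }

    public int pooledCount() {
        lock.lock();
        try {
            return free.size();
        } finally {
            lock.unlock();
        }
    }
}
```

With this sketch, allocating three buffers from a pool bounded at two never returns null, and releasing all three leaves only two buffers pooled.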
```java
free.offer(previouslyAllocated);
// Free the buffer if the number of pooled buffers is already the maximum number of batches.
// Otherwise return the buffer to the memory pool.
if (free.size() >= maxBatches) {
```
Perhaps we should rename maxBatches since it is no longer serving as a max. How about maxRetainedBatches or something like that since it is still a bound on the number of batches which the pool can hold onto indefinitely.
Yes. Fixed the variable name.
```java
assertThrows(IllegalArgumentException.class, () -> pool.release(buffer));
}

private ByteBuffer touch(ByteBuffer buffer) {
```
nit: touch seems a little vague. I think we're just trying to simulate some buffer usage?
Renamed the function to update.
```java
} finally {
    lock.unlock();
}
return Integer.MAX_VALUE;
```
2 billion bytes is 2GB? Is that enough?
Yes. This should be Long.MAX_VALUE.
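The point of this exchange can be sketched in a few lines. Since the pool now always allocates, its reported available memory is effectively unbounded, so returning `Integer.MAX_VALUE` (about 2GB) would understate it. The class name below is illustrative, not the actual Kafka source:

```java
// Sketch of the fix discussed above: report unbounded available memory
// as Long.MAX_VALUE rather than Integer.MAX_VALUE.
public class UnboundedPoolView {
    // Integer.MAX_VALUE would cap the reported capacity at ~2GB,
    // which no longer reflects a pool that always allocates.
    public long availableMemory() {
        return Long.MAX_VALUE;
    }
}
```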
hachikuji left a comment
LGTM. Just one comment about the javadoc.
```java
@@ -29,15 +29,15 @@
public class BatchMemoryPool implements MemoryPool {
```
Could we update the javadoc above to match current behavior?
Thanks @hachikuji. Merging -- the failures look unrelated.
Reviewers: Jason Gustafson <jason@confluent.io>
…eptember 2022) `Jenkinsfile` was the only conflict and we ignore the changes since they are not relevant to the Confluent build.

* apache-github/3.3: (61 commits)
  - KAFKA-14214: Introduce read-write lock to StandardAuthorizer for consistent ACL reads. (apache#12628)
  - KAFKA-14243: Temporarily disable unsafe downgrade (apache#12664)
  - KAFKA-14240; Validate kraft snapshot state on startup (apache#12653)
  - KAFKA-14233: disable testReloadUpdatedFilesWithoutConfigChange first to fix the build (apache#12658)
  - KAFKA-14238; KRaft metadata log should not delete segment past the latest snapshot (apache#12655)
  - KAFKA-14156: Built-in partitioner may create suboptimal batches (apache#12570)
  - MINOR: Adds KRaft versions of most streams system tests (apache#12458)
  - MINOR; Add missing li end tag (apache#12640)
  - MINOR: Mention that kraft is production ready in upgrade notes (apache#12635)
  - MINOR: Add upgrade note regarding the Strictly Uniform Sticky Partitioner (KIP-794) (apache#12630)
  - KAFKA-14222; KRaft's memory pool should always allocate a buffer (apache#12625)
  - KAFKA-14208; Do not raise wakeup in consumer during asynchronous offset commits (apache#12626)
  - KAFKA-14196; Do not continue fetching partitions awaiting auto-commit prior to revocation (apache#12603)
  - KAFKA-14215; Ensure forwarded requests are applied to broker request quota (apache#12624)
  - MINOR; Remove end html tag from upgrade (apache#12605)
  - Remove the html end tag from upgrade.html
  - KAFKA-14205; Document how to replace the disk for the KRaft Controller (apache#12597)
  - KAFKA-14203: Disable snapshot generation on broker after metadata errors (apache#12596)
  - KAFKA-14216: Remove ZK reference from org.apache.kafka.server.quota.ClientQuotaCallback javadoc (apache#12617)
  - KAFKA-14217: app-reset-tool.html should not show --zookeeper flag that no longer exists (apache#12618)
  - ...