[WIP] Use DefaultBlockingPool for the Global Processing Pool instead of StupidPool, which can allocate an arbitrary number of buffers and cause crashes. #5345
Conversation
    aggBuffer = lastBuffer;
  } else {
-   ResourceHolder<ByteBuffer> bb = bufferPool.take();
+   ResourceHolder<ByteBuffer> bb = bufferPool.takeOrFailOnTimeout(60000);
Is this competing with the processing pool? That means you could choke out processing threads by accident while incremental indexing is going on.
Yes, that would be true.
I think it is more a limitation of the current OffheapIncrementalIndex implementation, which cannot work with a fixed amount of resources and keeps allocating more and more buffers. Also, this implementation keeps dimensions etc. on-heap, so it doesn't really serve the purpose of being off-heap; things become too slow if dimensions are pushed off-heap due to repeated serde.
FWIW, for the above reasons, no one actually uses the current OffheapIncrementalIndex implementation and we could possibly remove it.
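To make the distinction under discussion concrete, here is a minimal sketch contrasting the two pool styles: a StupidPool-like pool that allocates a fresh buffer whenever its cache is empty (so its total footprint is unbounded under pressure), versus a DefaultBlockingPool-like pool with a fixed set of pre-allocated buffers where takers block or time out instead of allocating more. Class and method names here are illustrative, not Druid's actual API.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.TimeUnit;

// StupidPool-style: allocates a new buffer every time the cache is empty,
// so concurrent takers can drive allocation (and memory use) arbitrarily high.
final class UnboundedPool {
  private final ConcurrentLinkedQueue<ByteBuffer> cache = new ConcurrentLinkedQueue<>();
  private final int bufferSize;
  int allocated = 0; // grows without bound under pressure

  UnboundedPool(int bufferSize) {
    this.bufferSize = bufferSize;
  }

  synchronized ByteBuffer take() {
    ByteBuffer bb = cache.poll();
    if (bb == null) {
      allocated++; // nothing caps this
      bb = ByteBuffer.allocate(bufferSize);
    }
    return bb;
  }

  void giveBack(ByteBuffer bb) {
    cache.offer(bb);
  }
}

// DefaultBlockingPool-style: a fixed number of buffers allocated up front;
// when all are checked out, take() waits (bounded by a timeout) instead of allocating.
final class BoundedBlockingPool {
  private final BlockingQueue<ByteBuffer> pool;

  BoundedBlockingPool(int count, int bufferSize) {
    pool = new ArrayBlockingQueue<>(count);
    for (int i = 0; i < count; i++) {
      pool.offer(ByteBuffer.allocate(bufferSize)); // fixed total footprint
    }
  }

  // Blocks up to timeoutMs; returns null on timeout rather than allocating more.
  ByteBuffer take(long timeoutMs) {
    try {
      return pool.poll(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new RuntimeException(e);
    }
  }

  void giveBack(ByteBuffer bb) {
    pool.offer(bb);
  }
}
```

The blocking behavior is also why the question above matters: if indexing and query processing share one bounded pool, a long wait by one side can starve the other.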
I'm considering the removal of OffheapIncrementalIndex in this patch.
Also, it was only ever used by the GroupBy v1 implementation if explicitly configured; it was not actually possible to use OffheapIncrementalIndex for indexing.
- //check that stupid pool gives buffers that can hold at least one row's aggregators
- ResourceHolder<ByteBuffer> bb = bufferPool.take();
+ //check that buffer pool gives buffers that can hold at least one row's aggregators
+ ResourceHolder<ByteBuffer> bb = bufferPool.takeOrFailOnTimeout(60000);
This can also choke the processing pool, right?
Force-pushed from 88a457a to a5ecf61.
  *
  * @return a resource, or throw RuntimeException on timeout.
  */
+ default ReferenceCountingResourceHolder<T> takeOrFailOnTimeout(long timeoutMs)
Should this throw a checked Timeout exception of some kind?
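For context on that question, here is a hedged sketch of what such a default method could look like: a `BlockingPool` interface (illustrative, not Druid's exact signature) whose timed `take` returns null on timeout, with the default `takeOrFailOnTimeout` converting that into an unchecked `RuntimeException`, matching the javadoc above. Switching to a checked timeout exception would change only the `throws` clause and the wrapping.

```java
// Illustrative interface; Druid's actual BlockingPool returns
// ReferenceCountingResourceHolder<T> rather than a bare T.
interface BlockingPool<T> {
  // Timed take: returns null if no resource became available within timeoutMs.
  T take(long timeoutMs) throws InterruptedException;

  // Default helper as discussed in this review: fail loudly on timeout
  // instead of returning null, using an unchecked exception.
  default T takeOrFailOnTimeout(long timeoutMs) {
    try {
      T resource = take(timeoutMs);
      if (resource == null) {
        throw new RuntimeException(
            "Timed out after " + timeoutMs + " ms waiting for a pooled resource");
      }
      return resource;
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new RuntimeException(e);
    }
  }
}
```

A checked exception would force callers to handle timeouts explicitly, at the cost of propagating `throws` through query-processing call sites.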
  new OffheapBufferGenerator("intermediate processing", config.intermediateComputeSizeBytes()),
- config.getNumThreads(),
- config.poolCacheMaxCount()
+ config.getNumThreads()
Does this mean poolCacheMaxCount needs to be removed from the docs?
Hi guys, I don't think this should be a blocking issue for the 0.13.0 release. I'll remove the milestone.

This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that's incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@druid.apache.org list. Thank you for your contributions.

This pull request has been closed due to lack of activity. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.
Fixes #5319
Also includes the following changes:
- Removed OffheapIncrementalIndex.java (see #5345 (comment)).
- ParallelCombiner now uses a merge buffer rather than a processing buffer (see #4704 (review)).

TODO: check if the hardcoded timeout of 1 minute in all places is OK.