Is your feature request related to a problem or challenge?
Right now the memory accounting in datafusion appears to be inaccurate. If you limit memory within the application, that limit often isn't honoured very well, and looking at the process in top or similar tools you can see it uses far more RSS than you would expect.
Describe the solution you'd like
There are a few things we probably need to solve this:
Benchmark Peak Memory
We should introduce benchmarks for peak memory on a variety of different query types, so that we can ensure we don't go comically over our memory allowances.
There are two ways that I can see this being introduced:
- codspeed with --mode memory: https://github.com/CodSpeedHQ/codspeed?tab=readme-ov-file#memory. This would allow us to run a few datafusion-cli-like queries against the ClickHouse benchmarks etc.
- dhat-rs with specific tests for specific queries, ensuring that we don't go above peak memory plus some slop, i.e.:
#[global_allocator]
static ALLOC: dhat::Alloc = dhat::Alloc;

#[test]
fn test() {
    let _profiler = dhat::Profiler::builder().testing().build();

    // run a 1 GiB limited query here

    let stats = dhat::HeapStats::get();
    let limit: usize = 1024 * 1024 * 1024;
    // peak must stay within 1 GiB + 10% overhead
    dhat::assert!(stats.max_bytes <= limit + limit / 10);
}
Introduce the Claim API
Allow the memory pool in datafusion to use the new pool feature in arrow-rs.
Before this is useful, we need to ensure that we can claim full RecordBatches. There are two competing PRs that bring us closer to that goal:
- Add claim method to RecordBatch for memory accounting (arrow-rs#9433)
I am biased towards my PR of course, but whatever the arrow-rs team thinks is appropriate.
With that in place, we could add:
- A new executor that sits around DataSourceExec that claims RecordBatches as they come in.
- Sort/spill infra to ensure that when we spill to disk or read from disk, we reclaim the input record batches.
- Any other ExecutionPlan nodes that generate batches, such as row hash streams etc.
Describe alternatives you've considered
No response
Additional context
No response