Description
Is your feature request related to a problem or challenge?
When writing a parquet file in parallel, the implementation in #7562 can buffer parquet data faster than it can be written to the final output, because there is no back pressure and the intermediate files are buffered entirely in memory.
As described by @devinjdangelo in #7562 (comment)
I think the best possible solution would consume the sub parquet files incrementally from memory as they are produced, rather than buffering the entire file.
And #7562 (comment)
Ultimately, I'd like to be able to call SerializedRowGroupWriter.append_column as soon as possible -- before any parquet file has been completely serialized in memory. I.e. as a parallel task finishes encoding a single column for a single row group, eagerly flush those bytes to the concatenation task, then flush to ObjectStore and discard from memory. If the concatenation task can keep up with all of the parallel serializing tasks, then we could prevent ever buffering an entire row group in memory.
Describe the solution you'd like
I would like to see the output row groups written as they are produced, rather than all buffered and written after the fact, as suggested by @devinjdangelo above.
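To make the back pressure idea concrete, here is a minimal sketch (not DataFusion's actual implementation) of a bounded channel between the parallel encoding tasks and a single concatenation/writer task. `encode_row_group` and `append_to_output` are hypothetical placeholders for the real parquet serialization and ObjectStore upload steps; the channel capacity is what bounds how much serialized data sits in memory at once.

```rust
use tokio::sync::mpsc;

// Hypothetical stand-in for CPU-bound parquet encoding of one row group.
async fn encode_row_group(partition: usize, row_group: usize) -> Vec<u8> {
    format!("partition {partition}, row group {row_group}").into_bytes()
}

// Hypothetical stand-in for appending bytes to the final file
// (e.g. via SerializedRowGroupWriter and an ObjectStore multipart upload).
async fn append_to_output(bytes: Vec<u8>) {
    let _ = bytes;
}

#[tokio::main]
async fn main() {
    // Small capacity: encoders stall on `send` once the writer falls behind,
    // bounding how many serialized row groups are held in memory at once.
    let (tx, mut rx) = mpsc::channel::<Vec<u8>>(2);

    // Parallel encoding tasks, one per input partition.
    for partition in 0..4 {
        let tx = tx.clone();
        tokio::spawn(async move {
            for row_group in 0..8 {
                let bytes = encode_row_group(partition, row_group).await;
                // `send` awaits while the channel is full -- this is the back pressure.
                if tx.send(bytes).await.is_err() {
                    break;
                }
            }
        });
    }
    drop(tx); // close the channel once all senders finish

    // Single concatenation/writer task: consume eagerly, write, then discard.
    while let Some(bytes) = rx.recv().await {
        append_to_output(bytes).await;
    }
}
```

The same bound could just as well be expressed per column chunk rather than per row group; the point is only that the writer's consumption rate, not the encoders' production rate, limits memory use.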
Describe alternatives you've considered
No response
Additional context
No response