Limit parallelism / buffering when writing to a single parquet file in parallel  #7591

@alamb

Description

Is your feature request related to a problem or challenge?

When writing to a parquet file in parallel, the implementation in #7562 may buffer parquet data faster than it can be written to the final output, because there is no back pressure and the intermediate files are all buffered in memory.

As described by @devinjdangelo in #7562 (comment)

I think the best possible solution would consume the sub parquet files incrementally from memory as they are produced, rather than buffering the entire file.

And #7562 (comment)

Ultimately, I'd like to be able to call SerializedRowGroupWriter.append_column as soon as possible -- before any parquet file has been completely serialized in memory. I.e., as a parallel task finishes encoding a single column for a single row group, eagerly flush those bytes to the concatenation task, then flush to ObjectStore and discard them from memory. If the concatenation task can keep up with all of the parallel serializing tasks, then we could avoid ever buffering an entire row group in memory.
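
One way to get that back pressure (a sketch, not DataFusion's actual implementation): place a fixed-capacity channel between the parallel serialization tasks and the single concatenation task, so encoders suspend on `send` once the channel is full and resume only as the writer drains it. In the sketch below, `encode_row_group` is a hypothetical stand-in for the per-task parquet serialization in #7562, and row-group ordering is ignored for brevity.

```rust
use bytes::Bytes;
use tokio::sync::mpsc;

/// Hypothetical stand-in for the per-task serialization in #7562:
/// produces the bytes of one finished row group.
async fn encode_row_group(batch_id: usize) -> Bytes {
    Bytes::from(format!("encoded row group {batch_id}"))
}

#[tokio::main]
async fn main() {
    // Capacity 2: at most two encoded row groups are buffered at once.
    // `send` suspends when the channel is full, which is the back
    // pressure this issue asks for.
    let (tx, mut rx) = mpsc::channel::<Bytes>(2);

    // Parallel encoding tasks.
    let mut handles = Vec::new();
    for batch_id in 0..8 {
        let tx = tx.clone();
        handles.push(tokio::spawn(async move {
            let bytes = encode_row_group(batch_id).await;
            // Suspends here whenever the concatenation task falls behind.
            tx.send(bytes).await.expect("receiver dropped");
        }));
    }
    drop(tx); // channel closes once all senders are done

    // Single concatenation task: consume row groups as they arrive
    // instead of buffering every intermediate file in memory.
    while let Some(bytes) = rx.recv().await {
        // A real implementation would append these bytes to the final
        // parquet file / stream them to the ObjectStore here.
        println!("writing {} bytes", bytes.len());
    }

    for handle in handles {
        handle.await.unwrap();
    }
}
```

A real implementation would also have to preserve row-group order (for example, one channel per encoding task, received from in order); this sketch glosses over that.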

Describe the solution you'd like

I would like to see the output row groups written as they are produced, rather than all buffered and written after the fact, as suggested by @devinjdangelo above.
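
Sketching the consuming side of that idea (an assumption-laden illustration, not the actual design): the parquet crate's `SerializedRowGroupWriter::append_column` can copy an already-encoded column chunk into the output file, given the encoded bytes and the `ColumnCloseResult` produced when that column's writer was closed. Below, the channel carrying one `Vec<(Bytes, ColumnCloseResult)>` per row group is hypothetical; the parallel encoding tasks would fill it, keeping the columns of each row group in schema order.

```rust
use std::sync::Arc;

use bytes::Bytes;
use parquet::column::writer::ColumnCloseResult;
use parquet::errors::Result;
use parquet::file::properties::WriterProperties;
use parquet::file::writer::SerializedFileWriter;
use parquet::schema::types::TypePtr;
use tokio::sync::mpsc::Receiver;

/// Concatenation task: append each pre-encoded column to the final file
/// as soon as its row group is complete, instead of buffering whole
/// intermediate parquet files. Intended to run on a blocking thread
/// (e.g. via `tokio::task::spawn_blocking`) since it uses std::io::Write.
fn concatenate<W: std::io::Write + Send>(
    sink: W,
    schema: TypePtr,
    props: Arc<WriterProperties>,
    // Hypothetical channel: one message per finished row group, with the
    // encoded columns in schema order.
    mut rx: Receiver<Vec<(Bytes, ColumnCloseResult)>>,
) -> Result<()> {
    let mut writer = SerializedFileWriter::new(sink, schema, props)?;
    while let Some(columns) = rx.blocking_recv() {
        let mut row_group = writer.next_row_group()?;
        for (bytes, close) in columns {
            // `Bytes` implements `ChunkReader`, so the already-encoded
            // column chunk is copied straight into the output.
            row_group.append_column(&bytes, close)?;
        }
        row_group.close()?;
    }
    writer.close()?;
    Ok(())
}
```

A finer-grained variant could receive one column at a time (as the second quote above suggests), so long as columns still reach `append_column` in schema order within each row group; a bounded channel again supplies the back pressure.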

Describe alternatives you've considered

No response

Additional context

No response

Labels: enhancement (New feature or request)
