[iceberg] RecordBatch might have logical row mapping on physical arrays #974

@viirya

Description

Describe the bug

This is related to #973. After applying that fix, a test we run locally against an Iceberg table with deleted rows fails with incorrect query results.

This happens because Iceberg doesn't actually delete the rows from the arrays; instead, it stores a row mapping alongside the physical arrays, which is used to skip deleted rows when iterating over the rows of a columnar batch.

However, once we export the underlying Arrow record batch of a columnar batch to the native side, the record batch is purely physical, without the logical row mapping info. As a result, the deleted rows show up in the query results.

When exporting a record batch from an Iceberg record batch, we need to export the row mapping along with it.
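To illustrate the mismatch, here is a minimal plain-Python sketch (all names are hypothetical, not Iceberg's actual API) of a physical array with a logical row mapping, showing how iterating through the mapping hides deleted rows while exporting only the physical arrays re-exposes them:

```python
# Physical column data (5 rows) plus a row mapping; rows 1 and 3 are
# logically deleted, so the mapping only references rows 0, 2, and 4.
physical_col = ["a", "b", "c", "d", "e"]
row_mapping = [0, 2, 4]

def iterate_logical_rows(col, mapping):
    """What the columnar-batch row iterator effectively does:
    read physical values through the mapping, skipping deleted rows."""
    return [col[i] for i in mapping]

def export_physical_only(col):
    """Exporting the raw record batch without the mapping:
    the deleted rows leak back into the output."""
    return list(col)

def export_with_mapping_applied(col, mapping):
    """One way to keep results correct: materialize the mapping
    (a 'take'/selection) before handing the batch to the native side."""
    return [col[i] for i in mapping]

assert iterate_logical_rows(physical_col, row_mapping) == ["a", "c", "e"]
assert export_physical_only(physical_col) == ["a", "b", "c", "d", "e"]  # wrong: deleted rows present
assert export_with_mapping_applied(physical_col, row_mapping) == ["a", "c", "e"]
```

In Arrow terms, materializing the mapping corresponds to applying a take/selection over the arrays, which trades extra copying for a batch that is safe to export without side-channel metadata.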

Steps to reproduce

No response

Expected behavior

No response

Additional context

No response
