
[Doc][Python] Improve documentation regarding dealing with memory mapped files #28401

@asfimport

Description

While one of Arrow's promises is that it makes it easy to read/write data bigger than memory, it's not immediately obvious from the pyarrow documentation how to deal with memory-mapped files.

The documentation hints that you can open files as memory mapped (https://arrow.apache.org/docs/python/memory.html?highlight=memory_map#on-disk-and-memory-mapped-files), but it doesn't explain how to read/write Arrow Arrays or Tables from there.
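
For reference, the kind of usage that section hints at is roughly the following (a minimal sketch with a hypothetical filename; it only deals with plain bytes, not Arrow data):

```python
import pyarrow as pa

# Create a zero-filled file of a fixed size and memory-map it read/write
mmap = pa.create_memory_map("example.dat", 4096)
mmap.write(b"some bytes")

# Re-open the same file as a read-only memory map and read the bytes back
with pa.memory_map("example.dat", "r") as source:
    data = source.read(10)
print(data)
```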

While most high-level functions for reading/writing formats (Parquet, Feather, ...) have an easy-to-guess memory_map=True option, the documentation doesn't seem to have any example of how that is meant to work for the Arrow format itself, for example how to do it using RecordBatchFile*.
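
For the high-level readers, that option looks roughly like this (a sketch with hypothetical filenames; both pyarrow.parquet.read_table and pyarrow.feather.read_table expose a memory_map argument):

```python
import pyarrow.parquet as pq
import pyarrow.feather as feather

# Parquet: ask the reader to memory-map the source file instead of
# reading it into memory up front
table = pq.read_table("data.parquet", memory_map=True)

# Feather: same flag on the feather helper
table = feather.read_table("data.feather", memory_map=True)
```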

An addition to the memory-mapping section with a more meaningful example that reads/writes actual Arrow data (instead of plain bytes) would probably be helpful.
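
A sketch of what such an example might look like (hypothetical filename; writing with pa.OSFile plus pa.ipc.new_file, then reading the Table back through pa.memory_map plus pa.ipc.open_file so the data is not copied into RAM up front):

```python
import pyarrow as pa

table = pa.table({"col": list(range(1_000_000))})

# Write the Table to an Arrow IPC (RecordBatchFile) file on disk
with pa.OSFile("bigfile.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

# Map the file and read the Table back: the Table's buffers reference
# the mapped pages rather than copies in memory
with pa.memory_map("bigfile.arrow", "r") as source:
    loaded = pa.ipc.open_file(source).read_all()

print(loaded.num_rows)
```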

Reporter: Alessandro Molina / @amol-
Assignee: Alessandro Molina / @amol-

PRs and other links:

Note: This issue was originally created as ARROW-12650. Please see the migration documentation for further details.
