22 changes: 22 additions & 0 deletions python/pyarrow/tests/test_cuda.py
@@ -545,6 +545,28 @@ def put(*args, **kwargs):
put(position=position, nbytes=nbytes)


def test_buffer_device():
    buf = cuda.new_host_buffer(10)
    assert buf.device_type == pa.DeviceAllocationType.CUDA_HOST
    assert isinstance(buf.device, pa.Device)
    assert isinstance(buf.memory_manager, pa.MemoryManager)
    assert buf.is_cpu
    assert buf.device.is_cpu
    assert buf.device == pa.default_cpu_memory_manager().device
Member Author
@pitrou is it expected that a Buffer that has a CUDA_HOST device type still uses the CPUMemoryManager?

Because, for example, freeing this buffer is still done by the CudaDeviceManager (which is a different object from CudaMemoryManager; I assume it predates the MemoryManagers), and it's not entirely clear to me whether CudaMemoryManager is solely meant for CUDA memory or also for handling CUDA_HOST.

Member
I'm not sure, I don't remember what CUDA_HOST is precisely. @kkraus14 @zeroshade Do you have any insight?

Member
If a CUDA_HOST buffer is reachable from the CPU using its address, then it should probably have the CPU device, but which memory manager it should have is an open question. Presumably the memory manager that's able to deallocate it?

Member
By the way, how I envisioned this is that if a given memory area can be accessed both from CPU and from GPU, then it can have different Buffer instances pointing to it. This is what the View and ViewOrCopy APIs are for: they should ideally not force-copy if a transparent view is possible. This is also why it's better to use those APIs than to force-copy the contents when you have a non-CPU Buffer.
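A minimal C++ sketch of the ViewOrCopy idea described above (not part of this PR; the helper name GetCpuAccessibleBuffer is made up, while arrow::Buffer::ViewOrCopy and arrow::default_cpu_memory_manager are the existing APIs being referred to):

#include <memory>

#include <arrow/buffer.h>
#include <arrow/device.h>
#include <arrow/result.h>

// Hypothetical helper: return a CPU-accessible Buffer for `source`.
// If the bytes are already reachable from the CPU (a plain CPU buffer,
// or pinned CUDA_HOST memory), ViewOrCopy can return a zero-copy view;
// otherwise it falls back to copying into CPU memory.
arrow::Result<std::shared_ptr<arrow::Buffer>> GetCpuAccessibleBuffer(
    const std::shared_ptr<arrow::Buffer>& source) {
  return arrow::Buffer::ViewOrCopy(source, arrow::default_cpu_memory_manager());
}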

Member Author
> If a CUDA_HOST buffer is reachable from the CPU using its address, then it should probably have the CPU device,

A CudaHostBuffer is definitely reachable from the CPU, as we have this kind of code in our tests (viewing that buffer as a numpy array):

buf = cuda.new_host_buffer(size)
arr = np.frombuffer(buf, dtype=np.uint8)
# ... manipulate arr

> but which memory manager it should have is an open question. Presumably the memory manager that's able to deallocate it?

"The memory allocated by this function (cuMemHostAlloc) must be freed with cuMemFreeHost()" (from the NVIDIA docs), and the CudaHostBuffer::~CudaHostBuffer() deleter uses CudaDeviceManager::FreeHost() (which will indeed call cuMemFreeHost).

But AFAIK the MemoryManager returned from Buffer::memory_manager() is not directly used for deallocating the buffer (so it might not matter that much in practice).

> By the way, how I envisioned this is that if a given memory area can be accessed both from CPU and from GPU, then it can have different Buffer instances pointing to it. This is what the View and ViewOrCopy APIs are for: they should ideally not force-copy if a transparent view is possible. This is also why it's better to use those APIs than to force-copy the contents when you have a non-CPU Buffer.

Yes, thanks for the explanation. For this PR we are not copying/viewing, just checking the attributes of the created host buffer. But in #42223 I am adding bindings for CopyTo, so that is a good reason I should also expose ViewOrCopyTo.

And that reminds me that in the PrettyPrinting non-CPU data PR I used CopyTo, which could actually use ViewOrCopyTo. Opened a PR for this -> #43508

Contributor
I'm not sure what the device should be, but one thing I would point out is that CUDA pinned host memory (which is what a CUDA_HOST buffer is) follows CUDA stream ordering semantics, so if you just try to access it as normal CPU memory you run the risk of hitting a race condition.
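A hedged C++ sketch of that caveat (not from this PR; it assumes earlier device work writing into the pinned buffer was issued on the context's stream, that CudaContext::Synchronize() is the appropriate way to wait for it, and the helper name is made up):

#include <cstdint>
#include <memory>

#include <arrow/gpu/cuda_api.h>
#include <arrow/status.h>

// Hypothetical helper: wait for outstanding device work before the CPU
// reads a pinned (CUDA_HOST) buffer through its plain host pointer.
arrow::Status ReadPinnedAfterDeviceWrites(
    const std::shared_ptr<arrow::cuda::CudaContext>& ctx,
    const std::shared_ptr<arrow::cuda::CudaHostBuffer>& host_buf) {
  // Without this synchronization, a CPU read of host_buf->data() may race
  // with device work that is still writing into the pinned memory.
  ARROW_RETURN_NOT_OK(ctx->Synchronize());
  const uint8_t* cpu_view = host_buf->data();  // safe to dereference now
  (void)cpu_view;
  return arrow::Status::OK();
}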

Member Author
Isn't that then a problem/risk in general in the Arrow C++ library design? The CudaHostBuffer is a shallow subclass of the main Buffer (actually MutableBuffer) with is_cpu_ set to True, and it essentially only overrides the destructor to call cuMemFreeHost.
So when such a CudaHostBuffer object is used with the rest of the Arrow C++ library, it will just be seen as a plain CPU Buffer AFAIU, without any special precaution.
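Roughly, the class described above looks like this simplified sketch (not the exact declaration; see arrow/gpu/cuda_memory.h for the real one):

#include <arrow/buffer.h>

namespace arrow::cuda {

// Simplified sketch: a CudaHostBuffer is just a MutableBuffer over pinned
// host memory (so is_cpu() is true), whose destructor releases the
// allocation via CudaDeviceManager::FreeHost(), i.e. cuMemFreeHost().
class CudaHostBuffer : public MutableBuffer {
 public:
  using MutableBuffer::MutableBuffer;
  ~CudaHostBuffer();
};

}  // namespace arrow::cuda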

Contributor
Yes, it's a problem in general. I would argue the entire is_cpu method and concept is generally broken, but that's obviously a much bigger chunk of work and a can of worms we probably don't want to open now.

    # it is not entirely clear if CudaHostBuffer should use the default CPU memory
    # manager (as it does now), see https://github.com/apache/arrow/pull/42221
    assert buf.memory_manager.is_cpu

    _, buf = make_random_buffer(size=10, target='device')
    assert buf.device_type == pa.DeviceAllocationType.CUDA
    assert isinstance(buf.device, pa.Device)
    assert buf.device == global_context.memory_manager.device
    assert isinstance(buf.memory_manager, pa.MemoryManager)
    assert not buf.is_cpu
    assert not buf.device.is_cpu
    assert not buf.memory_manager.is_cpu


def test_BufferWriter():
    def allocate(size):
        cbuf = global_context.new_buffer(size)