GH-41126: [Python] Test Buffer device/device_type access on CUDA #42221
Conversation
@github-actions crossbow submit test-cuda-python

Revision: 17edaa9
Submitted crossbow builds: ursacomputing/crossbow @ actions-4ecbe58b37
```python
assert isinstance(buf.memory_manager, pa.MemoryManager)
assert buf.is_cpu
assert buf.device.is_cpu
assert buf.device == pa.default_cpu_memory_manager().device
```
@pitrou is it expected that a Buffer that has a CUDA_HOST device type still uses the CPUMemoryManager?
For example, freeing this buffer is still done by the CudaDeviceManager (which is a different object from CudaMemoryManager; I assume it predates the MemoryManagers), and it's not entirely clear to me whether CudaMemoryManager is solely meant for CUDA device memory or also for handling CUDA_HOST memory.
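For reference, a minimal sketch of the situation in question (not taken from the PR itself; the DeviceAllocationType.CUDA_HOST member name and the expected values are assumptions based on this thread):

```python
import pyarrow as pa
from pyarrow import cuda

hbuf = cuda.new_host_buffer(16)   # CUDA pinned ("CUDA_HOST") host memory

# The buffer is addressable from the CPU and reports a CPU device...
assert hbuf.is_cpu
assert hbuf.device.is_cpu
# ...and a MemoryManager (the CPU one, per the discussion above)...
assert isinstance(hbuf.memory_manager, pa.MemoryManager)
# ...while its allocation type is CUDA_HOST rather than plain CPU
# (enum member name assumed; this combination is exactly what is being questioned here).
assert hbuf.device_type == pa.DeviceAllocationType.CUDA_HOST
```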
I'm not sure, I don't remember what CUDA_HOST is precisely. @kkraus14 @zeroshade Do you have any insight?
If a CUDA_HOST buffer is reachable from the CPU using its address, then it should probably have the CPU device, but which memory manager it should have is an open question. Presumably the memory manager that's able to deallocate it?
By the way, how I envisioned this is that if a given memory area can be accessed both from CPU and from GPU, then it can have different Buffer instances pointing to it. This is what the View and ViewOrCopy APIs are for: they should ideally not force-copy if a transparent view is possible. This is also why it's better to use those APIs than to force-copy the contents when you have a non-CPU Buffer.
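To make that concrete with APIs that already exist on the Python side (the C++ Buffer::View / Buffer::ViewOrCopy helpers themselves are not bound here): pinned host memory can be viewed from the CPU without any transfer, while device-resident memory cannot, so ViewOrCopy would return a view in the first case and only copy in the second. A rough sketch:

```python
import numpy as np
from pyarrow import cuda

ctx = cuda.Context(0)

# Pinned host memory is addressable from both CPU and GPU, so a zero-copy
# CPU view is possible -- no transfer needed.
hbuf = cuda.new_host_buffer(64)
view = np.frombuffer(hbuf, dtype=np.uint8)

# Device-resident memory has no CPU-accessible address, so getting the data
# onto the CPU necessarily means a copy.
dbuf = ctx.new_buffer(64)
host_copy = dbuf.copy_to_host()
```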
> If a CUDA_HOST buffer is reachable from the CPU using its address, then it should probably have the CPU device,
A CudaHostBuffer is definitely reachable from the CPU, as we have this kind of code in our tests (viewing that buffer as a numpy array):
```python
buf = cuda.new_host_buffer(size)
arr = np.frombuffer(buf, dtype=np.uint8)
# ... manipulate arr
```

> but which memory manager it should have is an open question. Presumably the memory manager that's able to deallocate it?
"The memory allocated by this function (cuMemHostAlloc) must be freed with cuMemFreeHost()" (from the NVIDIA docs), and the CudaHostBuffer::~CudaHostBuffer() deleter uses CudaDeviceManager::FreeHost() (which will indeed call cuMemFreeHost).
But AFAIK the MemoryManager returned from Buffer::memory_manager() is not directly used for deallocating the buffer (so it might not matter that much in practice).
> By the way, how I envisioned this is that if a given memory area can be accessed both from CPU and from GPU, then it can have different Buffer instances pointing to it. This is what the View and ViewOrCopy APIs are for: they should ideally not force-copy if a transparent view is possible. This is also why it's better to use those APIs than to force-copy the contents when you have a non-CPU Buffer.
Yes, thanks for the explanation. For this PR we are not copying/viewing, just checking the attributes of the created host buffer. But in #42223 I am adding bindings for CopyTo, so that is a good reason why I should certainly expose ViewOrCopyTo as well.
And that reminds me that in the PrettyPrinting non-CPU data PR I used CopyTo, which could actually use ViewOrCopyTo instead. Opened a PR for this: #43508
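For illustration, a minimal sketch of how the CopyTo binding from #42223 is meant to be used (assuming it is exposed as Buffer.copy_to taking a device or memory manager; a ViewOrCopy counterpart would have the same shape):

```python
import pyarrow as pa
from pyarrow import cuda

ctx = cuda.Context(0)
dbuf = ctx.new_buffer(64)   # device-resident buffer

# Copy the device buffer into CPU memory via the binding added in #42223
# (assumed signature: Buffer.copy_to(<Device or MemoryManager>)).
cpu_buf = dbuf.copy_to(pa.default_cpu_memory_manager())
assert cpu_buf.is_cpu
```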
I'm not sure what the device should be, but one thing I would point out is that CUDA pinned host memory (which is what a CUDA_HOST buffer is) follows CUDA stream ordering semantics, so if you just try to access it as normal CPU memory, you run the risk of a race condition.
Isn't that then a problem/risk in general in the Arrow C++ library design? The CudaHostBuffer is a shallow subclass of the main Buffer (actually MutableBuffer) with is_cpu_ set to True (and essentially only overrides the destructor to call cuMemFreeHost).
So when such a CudaHostBuffer object is used with the rest of the Arrow C++ library, it will just be seen as a plain CPU Buffer AFAIU, without any special precaution.
Yes, it's a problem in general. I would argue the entire is_cpu method and concept is generally broken, but that's obviously a much bigger chunk of work and a can of worms we probably don't want to open now.
#42221 (comment) is an interesting discussion, but for the purpose of this PR I am just going to test the current behaviour (so that we at least have test coverage for accessing those attributes on CUDA), but I added a comment pointing to this discussion.
@github-actions crossbow submit test-cuda-python

Revision: 44d430a
Submitted crossbow builds: ursacomputing/crossbow @ actions-5b5d0a3c7c

After merging your PR, Conbench analyzed the 4 benchmarking runs that have been run so far on merge-commit 4314fd7. There were no benchmark performance regressions. 🎉 The full Conbench report has more details.
Rationale for this change
Adding tests for the new Buffer properties added in #41685, but now testing that they work out of the box with CUDA.
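A rough illustration of the kind of checks being added (not the literal test code; the DeviceAllocationType member name is assumed, and the expected values for pinned host buffers are exactly what the review thread above discusses):

```python
import pyarrow as pa
from pyarrow import cuda

ctx = cuda.Context(0)

# Device-resident buffer: not CPU-accessible, backed by a CUDA memory manager.
dbuf = ctx.new_buffer(64)
assert not dbuf.is_cpu
assert isinstance(dbuf.memory_manager, pa.MemoryManager)
assert dbuf.device_type == pa.DeviceAllocationType.CUDA   # member name assumed

# Pinned host buffer: CPU-accessible; see the CUDA_HOST discussion above.
hbuf = cuda.new_host_buffer(64)
assert hbuf.is_cpu
```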