
Conversation

@jorisvandenbossche (Member) commented Jun 20, 2024

Rationale for this change

Adding tests for the new Buffer properties added in #41685, but now verifying that they work out of the box with CUDA.
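
For reference, a minimal sketch of the kind of assertions this adds for a device buffer, assuming a CUDA-enabled pyarrow build (the device, memory_manager and is_cpu properties are the ones from #41685; pa.DeviceAllocationType is assumed to be exposed by the device bindings):

import pyarrow as pa
from pyarrow import cuda

ctx = cuda.Context(0)
cbuf = ctx.new_buffer(64)  # buffer allocated on GPU device 0

# the properties added in #41685 should work out of the box for CUDA buffers
assert isinstance(cbuf.memory_manager, pa.MemoryManager)
assert not cbuf.is_cpu
assert not cbuf.device.is_cpu
assert cbuf.device.device_type == pa.DeviceAllocationType.CUDA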

@jorisvandenbossche (Member Author) commented:

@github-actions crossbow submit test-cuda-python

@github-actions (bot) commented:

Revision: 17edaa9

Submitted crossbow builds: ursacomputing/crossbow @ actions-4ecbe58b37

Task: test-cuda-python (GitHub Actions)

assert isinstance(buf.memory_manager, pa.MemoryManager)
assert buf.is_cpu
assert buf.device.is_cpu
assert buf.device == pa.default_cpu_memory_manager().device
@jorisvandenbossche (Member Author) commented on the assertions above:

@pitrou is it expected that a Buffer that has a CUDA_HOST device type still uses the CPUMemoryManager?

Because, for example, freeing this buffer is still done by the CudaDeviceManager (which is a different object from CudaMemoryManager; I assume it predates the MemoryManagers), and it's not entirely clear to me whether CudaMemoryManager is solely meant for CUDA memory or also for handling CUDA_HOST.

@pitrou (Member) replied:

I'm not sure, I don't remember what CUDA_HOST is precisely. @kkraus14 @zeroshade Do you have any insight?

@pitrou (Member) replied:

If a CUDA_HOST buffer is reachable from the CPU using its address, then it should probably have the CPU device, but which memory manager it should have is an open question. Presumably the memory manager that's able to deallocate it?
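
The situation under discussion can be observed directly on a pinned host buffer (a sketch, assuming a CUDA-enabled build):

from pyarrow import cuda

hbuf = cuda.new_host_buffer(64)  # pinned host memory via cuMemHostAlloc
assert hbuf.is_cpu               # the address is directly CPU-dereferenceable
print(hbuf.device)               # currently reports the CPU device
print(hbuf.memory_manager)       # and the CPU memory manager, even though
                                 # freeing goes through cuMemFreeHost()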

@pitrou (Member) added:

By the way, how I envisioned this is that if a given memory area can be accessed both from CPU and from GPU, then it can have different Buffer instances pointing to it. This is what the View and ViewOrCopy APIs are for: they should ideally not force-copy if a transparent view is possible. This is also why it's better to use those APIs than to force-copy the contents when you have a non-CPU Buffer.
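
pyarrow does not expose these APIs at this point in the discussion, so purely as a sketch: a hypothetical view_or_copy_to binding mirroring the C++ Buffer::ViewOrCopy might be used as follows (view_or_copy_to and non_cpu_buf are illustrative names, not actual pyarrow API):

import pyarrow as pa

cpu_mm = pa.default_cpu_memory_manager()

# Hypothetical binding for arrow::Buffer::ViewOrCopy. For memory that is
# already CPU-accessible (e.g. CUDA pinned host memory) this would return
# a zero-copy view; for device-only memory it would fall back to a copy.
local_buf = non_cpu_buf.view_or_copy_to(cpu_mm)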

@jorisvandenbossche (Member Author) replied:

> If a CUDA_HOST buffer is reachable from the CPU using its address, then it should probably have the CPU device,

A CudaHostBuffer is definitely reachable from the CPU, as we have this kind of code in our tests (viewing that buffer as a numpy array):

import numpy as np
from pyarrow import cuda

buf = cuda.new_host_buffer(size)
arr = np.frombuffer(buf, dtype=np.uint8)
# ... manipulate arr

> but which memory manager it should have is an open question. Presumably the memory manager that's able to deallocate it?

"The memory allocated by this function (cuMemHostAlloc) must be freed with cuMemFreeHost()" (from the NVIDIA docs), and the CudaHostBuffer::~CudaHostBuffer() deleter uses CudaDeviceManager::FreeHost() (which will indeed call cuMemFreeHost).

But AFAIK the MemoryManager returned from Buffer::memory_manager() is not directly used for deallocating the buffer (so it might not matter that much in practice).

> By the way, how I envisioned this is that if a given memory area can be accessed both from CPU and from GPU, then it can have different Buffer instances pointing to it. This is what the View and ViewOrCopy APIs are for: they should ideally not force-copy if a transparent view is possible. This is also why it's better to use those APIs than to force-copy the contents when you have a non-CPU Buffer.

Yes, thanks for the explanation. For this PR we are not copying or viewing, just checking the attributes of the created host buffer. But in #42223 I am adding bindings for CopyTo, so that is a good reason to certainly expose ViewOrCopyTo as well.

And that reminds me that in the PrettyPrinting non-CPU data PR, I used CopyTo where ViewOrCopyTo could actually be used. Opened a PR for this -> #43508

@kkraus14 (Contributor) commented:

I'm not sure about the answer to what the device should be, but one thing I would point out is that CUDA pinned host memory (which is what a CUDA_HOST buffer is) follows CUDA stream-ordering semantics, so you run the risk of a race condition if you just try to access it as normal CPU memory.
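
To make the hazard concrete, a sketch (the asynchronous write is left abstract, and ctx.synchronize() is a hypothetical binding for the C++ CudaContext::Synchronize()):

import numpy as np
from pyarrow import cuda

ctx = cuda.Context(0)
hbuf = cuda.new_host_buffer(1024)  # CUDA pinned (CUDA_HOST) memory

# suppose a kernel or copy writes into hbuf on a CUDA stream here ...
arr = np.frombuffer(hbuf, dtype=np.uint8)  # zero-copy CPU view
# reading arr now may observe stale data: the write is stream-ordered,
# even though the buffer looks like plain CPU memory
ctx.synchronize()  # hypothetical binding; after this, reads are consistent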

@jorisvandenbossche (Member Author) replied:

Isn't that then a problem/risk in the Arrow C++ library design in general? The CudaHostBuffer is a shallow subclass of the main Buffer (actually MutableBuffer) with is_cpu_ set to true (it essentially only overrides the destructor to call cuMemFreeHost). So when such a CudaHostBuffer object is used with the rest of the Arrow C++ library, it will just be seen as a plain CPU Buffer AFAIU, without any special precaution.

@kkraus14 (Contributor) replied:

Yes, it's a problem in general. I would argue the entire is_cpu method and concept is generally broken, but that's obviously a much bigger chunk of work and a can of worms we probably don't want to open now.

The github-actions bot added the awaiting changes label and removed the awaiting committer review label on Jun 26, 2024.
The github-actions bot added the awaiting change review label and removed the awaiting changes label on Aug 8, 2024.
@jorisvandenbossche (Member Author) commented:

#42221 (comment) is an interesting discussion, but for the purpose of this PR I am just going to test the current behaviour (so that we at least have test coverage for accessing those attributes on CUDA), with a code comment added pointing to this discussion.

@jorisvandenbossche (Member Author) commented:

@github-actions crossbow submit test-cuda-python

@github-actions (bot) commented Aug 8, 2024

Revision: 44d430a

Submitted crossbow builds: ursacomputing/crossbow @ actions-5b5d0a3c7c

Task: test-cuda-python (GitHub Actions)

@jorisvandenbossche merged commit 4314fd7 into apache:main on Aug 9, 2024.
@jorisvandenbossche removed the awaiting change review label on Aug 9, 2024.
@jorisvandenbossche deleted the gh-41126-cuda-testing branch on Aug 9, 2024 at 17:40.
@conbench-apache-arrow commented:

After merging your PR, Conbench analyzed the 4 benchmarking runs that have been run so far on merge-commit 4314fd7.

There were no benchmark performance regressions. 🎉

The full Conbench report has more details.
