Conversation
|
bors try |
try
Build failed: |
|
@jpsamaroo can you unbork AMDGPU on nightly? |
|
In the examples, we could now simplify code like

```julia
if isa(a, Array)
    kernel! = naive_transpose_kernel!(CPU(), 4)
else
    kernel! = naive_transpose_kernel!(CUDADevice(), 256)
end
```

to

```julia
device = KernelAbstractions.get_device(A)
n = isa(a, AbstractGPUArray) ? 256 : 4
kernel! = naive_transpose_kernel!(device, n)
```

but it would require a dependency (at least for the example) on GPUArrays.jl. I could also add a function

```julia
device = KernelAbstractions.get_device(A)
n = KernelAbstractions.isgpuarray(A) ? 256 : 4
kernel! = naive_transpose_kernel!(device, n)
```

without additional dependencies. What do you think? |
|
Hm we could also add a |
But what if the user passes a
My thinking was that the author of |
|
But as for the example - silly me, we can just do

```julia
device = KernelAbstractions.get_device(A)
n = device isa GPU ? 256 : 4
kernel! = naive_transpose_kernel!(device, n)
```

Very literate, I think. :-) |
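For context, a minimal end-to-end sketch of the pattern discussed above, assuming the KernelAbstractions API of this era (kernel launches return an event that is `wait`ed on) and a simplified stand-in kernel body, not the exact one from the repository's examples:

```julia
using KernelAbstractions

# Simplified transpose kernel: each work-item copies one element.
@kernel function naive_transpose_kernel!(b, @Const(a))
    i, j = @index(Global, NTuple)
    @inbounds b[j, i] = a[i, j]
end

function naive_transpose!(b, a)
    # Pick the device and workgroup size from the input array itself,
    # with no isa(a, Array) branching in user code.
    device = KernelAbstractions.get_device(a)
    n = device isa GPU ? 256 : 4
    kernel! = naive_transpose_kernel!(device, n)
    event = kernel!(b, a; ndrange=size(a))
    wait(event)
    return b
end
```

The same call site then works unchanged for `Array`, `CuArray`, or `ROCArray` inputs, which is the point of `get_device`.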
|
I changed the examples accordingly. |
|
Looks like bors is still running. :-) |
|
bors try |
try
Build failed: |
|
@vchuravy , does this design look better to you now? |
vchuravy
left a comment
I guess so. I am not a fan that this requires us to pull in LinearAlgebra and SparseArrays, even though these are currently stdlibs. I would rather have the question of "what is your storage array" be solved somewhere like Adapt.jl.
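One way the "what is your storage array" question could be answered without KernelAbstractions depending on LinearAlgebra/SparseArrays is a small trait that wrapper authors extend for their own types. This is only a hypothetical sketch of the idea; the `storage_array` name below is invented, not an existing Adapt.jl or KernelAbstractions API:

```julia
# Hypothetical trait: unwrap one layer at a time until we reach
# a plain storage array. Each wrapper's own package would extend
# this, instead of KernelAbstractions special-casing wrapper types.
storage_array(A::AbstractArray)     = A                        # fallback: A is its own storage
storage_array(A::SubArray)          = storage_array(parent(A)) # views forward to their parent
storage_array(A::PermutedDimsArray) = storage_array(parent(A))

# get_device would then only need one method on the unwrapped storage:
# get_device(A) = device_of(storage_array(A))
```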
|
bors try |
try
Build failed: |
Yes, that would be nice - I've also been thinking that Adapt.jl could eventually get "read/inspect" functionality in addition to its "write/transform" functionality. In the end, the designer of a data structure knows best what "underlying" data should mean (especially if there are several arrays), so the clean way of doing a recursive "where is this stored?" would be to implement an "inspect" companion to |
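The "read/inspect" companion mentioned here could mirror how Adapt.jl's `adapt_structure` recurses through a wrapper's fields, but fold over them instead of rebuilding the wrapper. A hypothetical sketch; neither `inspect_structure` nor `Diagonalish` exists in Adapt.jl:

```julia
# adapt_structure rebuilds a wrapper around adapted children; an
# "inspect" companion would recurse over the same children read-only.
inspect_structure(f, A::AbstractArray) = f(A)   # leaf: ask the array itself

# The author of a data structure knows which fields hold the
# underlying data, so they define the recursion for their type:
struct Diagonalish{T,V<:AbstractVector{T}}
    diag::V
end
inspect_structure(f, D::Diagonalish) = inspect_structure(f, D.diag)

# "Where is this stored?" then becomes a one-liner:
# where_stored(A) = inspect_structure(KernelAbstractions.get_device, A)
```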
|
Is bors hanging? |
|
bors try |
try
Build failed: |
|
Is bors stalled again? |
|
You can click on the red x next to the "Try #" https://github.com/JuliaGPU/KernelAbstractions.jl/runs/3927630339 |
|
ROCm CI got hung after I updated the local buildkite repo; I've restarted the runners, which are now processing jobs. |
|
Thanks! |
|
I think the remaining test failures are unrelated to this PR (correct?). |
|
I saw JuliaGPU/AMDGPU.jl#187 got merged with a new AMDGPU.jl release. Should we try bors again? |
|
@jpsamaroo , could you kick bors again, with the new AMD fixes? |
|
I saw you made some changes @vchuravy , do I need to do anything in addition? (Am I correct that the current test failures are unrelated to this PR?) |
|
The ROCm failure looks related; only Julia nightly is currently expected to fail. |
|
Thank you Oliver! |
|
Thank you Valentin! |
Closes #268