Optimize Vulkan buffer transfers on UMA (Unified Memory Architecture) devices #22462
winstonma wants to merge 2 commits into ggml-org:master from
Conversation
|
Hi @winstonma, thanks for your contribution! Per our contribution guidelines, the automated PR checker found the following issue(s) that need your attention:
Please note that maintainers reserve the right to make final decisions on PRs. If you believe there is a mistake, please comment below. |
|
So how can I get this PR reviewed? Thanks |
|
I have a Ryzen 7 5825U with Vega 8. I am seeing almost a 200% prompt processing increase. Thank you very much. Master with #22462, #22455 and #21751 merged:
./llama-bench -m /home/tipu/AI/models/unsloth/Qwen3-Coder-Next/Qwen3-Coder-Next-UD-Q5_K_S-00001-of-00003.gguf -m /home/tipu/AI/models/unsloth/Qwen36-35-A3B/Qwen36-35B-A3B-Q8.gguf -ngl 100 --ubatch-size 1088 --batch-size 2048 --mmap 0 -fa 1 -d 0,8096 -r 3
build: 4e522bfe4 (8961)
Original master:
./llama-bench -m /home/tipu/AI/models/unsloth/Qwen3-Coder-Next/Qwen3-Coder-Next-UD-Q5_K_S-00001-of-00003.gguf -m /home/tipu/AI/models/unsloth/Qwen36-35-A3B/Qwen36-35B-A3B-Q8.gguf -ngl 100 --ubatch-size 1088 --batch-size 2048 --mmap 0 -fa 1 -d 0,8096 -r 3
build: b1a5bd4 (8938) |
|
Okay, I will take a look. I just ran llama-bench and didn't run llama-cli to check the output. |
|
@engrtipusultan I ran the LLM model but I couldn't reproduce what you saw.
Did you see any good result after reverting only this commit? Here is the llama-bench result on my machine, using version 8966. With this PR: |
e95b92d to da5e315
|
Yes, reverting to the latest master resolves the issue. So it is one of your two PRs that caused it. I checked on llama-server as shown in the screenshots. |
|
If you want, tomorrow, I can check both PRs one by one |
- Adds a configurable threshold via env var: GGML_VK_UMA_NON_CACHED_DIRECT_READ_THRESHOLD (default now 512 * 1024).
- Introduces ggml_vk_uma_non_cached_direct_read_threshold() to parse and cache that env var once, with validation and warning logs on invalid/overflow values.
- Introduces ggml_vk_use_uma_direct_read(vk_buffer &, size_t) to centralize the direct-read decision logic.
- Replaces duplicated inline heuristics in three read paths with the shared helper:
  - ggml_vk_buffer_read_2d_async()
  - ggml_vk_buffer_read()
  - ggml_backend_vk_get_tensor_async()
- Keeps the small non-cached UMA async behavior explicit: if a direct read is not preferred and sync staging is unavailable, it returns false so the caller falls back.
- Adds the headers needed for parsing/error handling: <cstdlib> and <cerrno>.
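The parse-once-and-cache behavior described in the PR summary can be sketched roughly as follows. This is a minimal standalone sketch: only the env var name and the 512 * 1024 default come from the PR description; the helper name parse_threshold_env and its exact validation details are assumptions.

```cpp
#include <cerrno>
#include <cstdio>
#include <cstdlib>

// Hypothetical stand-in for ggml_vk_uma_non_cached_direct_read_threshold():
// parse the value once, fall back to the default on invalid/overflow input.
static size_t parse_threshold_env(const char * val, size_t fallback) {
    if (val == nullptr || *val == '\0') {
        return fallback;
    }
    errno = 0;
    char * end = nullptr;
    unsigned long long parsed = strtoull(val, &end, 10);
    if (errno == ERANGE || end == val || *end != '\0') {
        fprintf(stderr, "warning: invalid threshold value '%s', using default\n", val);
        return fallback;
    }
    return (size_t) parsed;
}

static size_t uma_non_cached_direct_read_threshold() {
    // static-local init runs exactly once, caching the parsed value
    static const size_t cached = parse_threshold_env(
        getenv("GGML_VK_UMA_NON_CACHED_DIRECT_READ_THRESHOLD"), 512 * 1024);
    return cached;
}
```

The static local means repeated calls on a hot read path never touch the environment again after the first call.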
|
@engrtipusultan I just updated the PR code. Could you please see if it breaks on your side? From a performance perspective I don't see a huge difference in pp and tg; I would consider this a micro-optimization for UMA devices. |
|
This PR (#22462) Current master (commit 5d56eff) |
|
@winstonma the output is good: etc. I will try the other commit next! Thank you for these PRs. |
|
Good to hear the results. Actually I started seeing benchmark improvements only when both PRs are merged together. Apart from these two commits, AI also identified several smaller optimizations for the UMA Vulkan path, so I will implement, test, and create PRs if benchmarks show promising results. |
if (dst->device->uma && (dst->memory_property_flags & vk::MemoryPropertyFlagBits::eHostVisible)) {
    GGML_ASSERT(dst->memory_property_flags & vk::MemoryPropertyFlagBits::eHostCoherent);
    if (width == spitch) {
        deferred_memcpy((uint8_t *) dst->ptr + offset, src, width * height, &subctx->in_memcpys);
I don't think this is correct for the same reasons I commented in #20018. The async copies need to run on the queue to stay in order with other commands.
Thanks for the review. I am not familiar with these. I asked Codex to write a test case to verify the async copies, and it passes the test case. Here is the follow-up question that I asked:
Yes, the code is implemented to stay ordered with other backend work.
- In the UMA host-visible branch at if (dst->device->uma && (dst->memory_property_flags & vk::MemoryPropertyFlagBits::eHostVisible)), the copy is not executed immediately. It is queued via deferred_memcpy into subctx->in_memcpys.
- Those queued host writes are flushed only when the context is submitted, in ggml_vk_run_deferred_uploads and ggml_vk_submit_transfer_ctx.
- For compute-path submission, deferred uploads are run right before submit in ggml_vk_run_deferred_uploads(compute_ctx);. For transfer-path submission, the same behavior is in ggml_vk_run_deferred_uploads(cpy_ctx);.
- The async tensor API routes into this path from ggml_backend_vk_set_tensor_async, so these copies participate in the same submission/sync chain as other backend commands.
- If the transfer queue is enabled, cross-queue ordering is linked by the timeline semaphore signal/wait in ctx->transfer_semaphore.value++; and result->s->wait_semaphores.push_back(ctx->transfer_semaphore);.
So for this code specifically, ordering is preserved because writes are deferred and then flushed at queue-submit boundaries, not applied out-of-band.
You need to be familiar with it. Copy-pasting AI responses into maintainer questions is not allowed because we do not have time or patience to debate an AI that can make up wrong claims way faster than any human could debunk them.
Frankly I'm not quite sure I follow the question. But I added some logging to see whether it answers the question. This is the debug log:
❯ ./build-vk-debug/bin/llama-cli -m ~/model/gemma-4-E4B-it-UD-Q4_K_XL.gguf -p "Hello" -n 16 2>&1 | grep VK_TIMELINE_HANDSHAKE
Loading model... |VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=1 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=1 last_waited=0 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=2 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=2 last_waited=1 source=ggml_vk_synchronize \VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=3 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=3 last_waited=2 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=4 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=4 last_waited=3 source=ggml_vk_synchronize
▄▄ ▄▄
██ ██
██ ██ ▀▀█▄ ███▄███▄ ▀▀█▄ ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██ ██ ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
██ ██
▀▀ ▀▀
build : b8960-fe1eb0302
model : gemma-4-E4B-it-UD-Q4_K_XL.gguf
modalities : text
available commands:
/exit or Ctrl+C stop or exit
/regen regenerate the last response
/clear clear the chat history
/read <file> add a text file
/glob <pattern> add text files using globbing pattern
> Hello
|VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=5 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=5 last_waited=4 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=6 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=6 last_waited=5 source=ggml_vk_synchronize -VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=7 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=7 last_waited=6 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=8 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=8 last_waited=7 source=ggml_vk_synchronize HelloVK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=9 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=9 last_waited=8 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=10 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=10 last_waited=9 source=ggml_vk_synchronize
!VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=11 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=11 last_waited=10 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=12 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=12 last_waited=11 source=ggml_vk_synchronize
HowVK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=13 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=13 last_waited=12 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=14 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=14 last_waited=13 source=ggml_vk_synchronize
canVK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=15 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=15 last_waited=14 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=16 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=16 last_waited=15 source=ggml_vk_synchronize
IVK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=17 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=17 last_waited=16 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=18 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=18 last_waited=17 source=ggml_vk_synchronize
helpVK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=19 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=19 last_waited=18 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=20 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=20 last_waited=19 source=ggml_vk_synchronize
youVK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=21 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=21 last_waited=20 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=22 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=22 last_waited=21 source=ggml_vk_synchronize
todayVK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=23 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=23 last_waited=22 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=24 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=24 last_waited=23 source=ggml_vk_synchronize
?VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=25 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=25 last_waited=24 source=ggml_vk_synchronize
VK_TIMELINE_HANDSHAKE SIGNAL TQ->CQ: signal_value=26 source=ggml_vk_submit_transfer_ctx
VK_TIMELINE_HANDSHAKE WAIT_SUBMIT CQ<-TQ: wait_value=26 last_waited=25 source=ggml_vk_synchronize
[ Prompt: 71.1 t/s | Generation: 18.2 t/s ]
According to the log, the Vulkan timeline semaphore creates a system where the Compute Queue is incapable of outrunning the data being moved by the Transfer Queue, so ordering is maintained. Also, because the Compute Queue is blocked by a timeline semaphore wait operation until the Transfer Queue signals completion, there is no risk of the GPU reading stale or partially written memory.
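The signal/wait handshake in the log can be modeled with a toy monotonic counter. This is illustrative only, not the Vulkan API: a wait for value N only passes once a submit has signaled a value of at least N, which is the property the TQ->CQ handshake relies on.

```cpp
#include <cstdint>

// Toy model of a timeline semaphore's monotonic counter (illustrative,
// not Vulkan API code): signal raises the value, a wait is satisfied
// once the value has reached the requested point.
struct toy_timeline {
    uint64_t value = 0;
};

static void toy_signal(toy_timeline & t, uint64_t v) {
    if (v > t.value) {
        t.value = v;   // timeline values only move forward
    }
}

static bool toy_wait_satisfied(const toy_timeline & t, uint64_t v) {
    return t.value >= v;
}
```

In the real backend the wait blocks the queue in hardware; the toy version only checks whether the wait condition holds.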
Disabling Transfer Queue on AMD UMA
I also submitted another PR to disable the transfer queue on AMD UMA devices. If the transfer queue is disabled, the code path naturally falls back to a single-queue model where all operations are submitted to the Compute Queue. In this scenario, ordering is maintained by default due to the sequential nature of command submission within a single Vulkan queue.
Regardless of the transfer queue or compute queue, ordering is maintained for commands you submit to the queue. That does not apply to deferred memcpys. in_memcpys run on queue submission. out_memcpys run (in specific cases) after a fence wait that makes sure all queue commands are done. This will not work with the backend async read/write functions because those assume that the commands run in the right order in the queue.
It may work in your tests because you get lucky and the order works out, but this is not guaranteed. This change is fundamentally unsafe.
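The maintainer's point can be illustrated with a toy version of the deferred-memcpy pattern (names modeled loosely on the PR code; these are not the real ggml-vulkan types): the host copy does not happen when deferred_memcpy is called, only when the context is submitted, so it is not interleaved with commands already recorded on the queue.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Toy model of the deferred host-copy pattern (illustrative types only).
// Copies queued via toy_deferred_memcpy are performed at the submit
// boundary, not at call time.
struct toy_pending_copy {
    void *       dst;
    const void * src;
    size_t       n;
};

struct toy_transfer_ctx {
    std::vector<toy_pending_copy> in_memcpys;
};

static void toy_deferred_memcpy(void * dst, const void * src, size_t n,
                                std::vector<toy_pending_copy> * queue) {
    queue->push_back({dst, src, n});   // record, do not copy yet
}

static void toy_submit(toy_transfer_ctx & ctx) {
    // All queued host writes run here, at queue-submit time.
    for (const auto & m : ctx.in_memcpys) {
        memcpy(m.dst, m.src, m.n);
    }
    ctx.in_memcpys.clear();
}
```

Between the call and the submit, the destination still holds its old contents, which is exactly the window where a queue command reading it would see stale data.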


Overview
This PR optimizes Vulkan buffer transfers on UMA (Unified Memory Architecture) devices by bypassing GPU staging buffers when possible and using direct CPU memory access instead. The changes target situations where GPU and CPU memory are physically the same, making direct copies more efficient.
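As a rough sketch of the decision the PR centralizes: a direct host read is only worth considering on a UMA device whose buffer is host-visible, and for non-cached host memory only below a size threshold. The predicate below is an assumption based on the PR description, not the actual body of ggml_vk_use_uma_direct_read, and toy_buffer stands in for the real vk_buffer/memory-property types.

```cpp
#include <cstddef>

// Illustrative buffer descriptor; the real code inspects vk_buffer and
// vk::MemoryPropertyFlagBits.
struct toy_buffer {
    bool uma;           // device has unified memory
    bool host_visible;  // buffer memory is mappable by the CPU
    bool host_cached;   // host access goes through the CPU cache
};

// Assumed shape of the direct-read decision: always direct for cached
// host-visible UMA memory; thresholded for non-cached memory, where
// large uncached reads would be slow.
static bool use_uma_direct_read(const toy_buffer & buf, size_t size,
                                size_t non_cached_threshold) {
    if (!buf.uma || !buf.host_visible) {
        return false;               // must go through a staging buffer
    }
    if (buf.host_cached) {
        return true;                // cached reads are cheap at any size
    }
    return size <= non_cached_threshold;
}
```

When the predicate returns false, the caller falls back to the existing staging-buffer path, which is why the PR keeps that fallback explicit.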
Additional information
This is the benchmark result:
Requirements