forked from ggml-org/llama.cpp
Pull requests: rasbid/llama.cpp
#18 Prefer VRAM allocations on AMD GCN Vulkan GPUs (codex) · opened Oct 13, 2025 by rasbid (Owner)
#16 Allow subgroup reductions on stable AMD GCN Vulkan drivers (codex) · opened Oct 13, 2025 by rasbid (Owner)
#14 Adjust Vulkan submit batching for AMD GCN GPUs (codex) · opened Oct 13, 2025 by rasbid (Owner)
#11 Tune AMD GCN warptiles for large and small tiles (codex) · opened Oct 13, 2025 by rasbid (Owner)
#10 Prefer device-local allocations on AMD GCN Vulkan GPUs (codex) · opened Oct 13, 2025 by rasbid (Owner)
#9 Allow subgroup reductions on stable AMD GCN drivers (codex) · opened Oct 13, 2025 by rasbid (Owner)
#8 Add AMD GCN Vulkan pipeline subgroup overrides (codex) · opened Oct 13, 2025 by rasbid (Owner)
#5 Add fallback for AMD shader core count without shader core properties2 (codex) · opened Oct 12, 2025 by rasbid (Owner)
#2 Improve Vulkan AMD GCN detection and build controls (codex) · opened Sep 25, 2025 by rasbid (Owner)