Pull requests: rasbid/llama.cpp

#18 Prefer VRAM allocations on AMD GCN Vulkan GPUs [codex] (opened Oct 13, 2025 by rasbid, Owner)
#17 Tune AMD GCN Vulkan warptiles [codex] (opened Oct 13, 2025 by rasbid, Owner)
#15 Tune AMD GCN Vulkan subgroup sizing [codex] (opened Oct 13, 2025 by rasbid, Owner)
#14 Adjust Vulkan submit batching for AMD GCN GPUs [codex] (opened Oct 13, 2025 by rasbid, Owner)
#13 Clamp Vulkan DMMV workgroup sizes on AMD GCN [codex] (opened Oct 13, 2025 by rasbid, Owner)
#12 Enable subgroup float reductions on AMD GCN [codex] (opened Oct 13, 2025 by rasbid, Owner)
#11 Tune AMD GCN warptiles for large and small tiles [codex] (opened Oct 13, 2025 by rasbid, Owner)
#10 Prefer device-local allocations on AMD GCN Vulkan GPUs [codex] (opened Oct 13, 2025 by rasbid, Owner)
#9 Allow subgroup reductions on stable AMD GCN drivers [codex] (opened Oct 13, 2025 by rasbid, Owner)
#8 Add AMD GCN Vulkan pipeline subgroup overrides [codex] (opened Oct 13, 2025 by rasbid, Owner)
#7 Enable AMD GCN subgroup DMMV pipelines [codex] (opened Oct 12, 2025 by rasbid, Owner)
#6 Tune Vulkan matmul tiles for AMD GCN wave64 [codex] (opened Oct 12, 2025 by rasbid, Owner)
#4 Improve AMD GCN detection fallback [codex] (opened Oct 12, 2025 by rasbid, Owner)
#3 Add Vulkan matmul profiling instrumentation [codex] (opened Oct 7, 2025 by rasbid, Owner)
#2 Improve Vulkan AMD GCN detection and build controls [codex] (opened Sep 25, 2025 by rasbid, Owner)
#1 Add detailed startup logging for llama-cli [codex] (opened Sep 23, 2025 by rasbid, Owner)