ggml : add CPU TurboQuant KV cache types (TBQ3_0 / TBQ4_0) #21089
elusznik wants to merge 3 commits into ggml-org:master from
Conversation
|
Hi @elusznik, thanks for your contribution! Per our contribution guidelines, the automated PR checker found the following issue(s) that need your attention:
Please note that maintainers reserve the right to make final decisions on PRs. If you believe there is a mistake, please comment below. |
|
Issue #20977 |
Pull request overview
This PR introduces two new CPU-only TurboQuant KV-cache ggml types (tbq3_0, tbq4_0) and wires them through ggml’s type system, CPU quantize/dequantize + vec_dot, llama KV/graph handling, tooling, and tests so they can be selected as KV cache formats and consumed by CPU flash-attention.
Changes:
- Add `GGML_TYPE_TBQ3_0`/`GGML_TYPE_TBQ4_0` (plus ftype plumbing) with block layouts, quantize/dequantize, and CPU `vec_dot` support.
- Update llama KV-cache views + attention graph to handle TBQ tensors (cast + reshape for attention).
- Expose types in CLI/tools docs and add regression test coverage (`test-quantize-fns`, `test-backend-ops`).
Reviewed changes
Copilot reviewed 26 out of 26 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| tools/server/README.md | Document tbq3_0/tbq4_0 as allowed KV cache types. |
| tools/quantize/quantize.cpp | Add quantize tool options for TBQ ftypes. |
| tools/llama-bench/llama-bench.cpp | Allow parsing tbq3_0/tbq4_0 type names. |
| tools/completion/README.md | Document TBQ cache types for completion tool args. |
| tools/cli/README.md | Document TBQ cache types for CLI args. |
| tests/test-quantize-fns.cpp | Add TBQ dispatch + table/codebook checks and error thresholds. |
| tests/test-backend-ops.cpp | Add backend-op coverage for TBQ in GET_ROWS/SET_ROWS/CPY/FLASH_ATTN_EXT. |
| src/llama-quant.cpp | Add TBQ ftype/type mapping + fallback behavior. |
| src/llama-kv-cache.cpp | Add TBQ-specific KV views (3D) for K/V retrieval. |
| src/llama-graph.cpp | Cast+reshape TBQ KV tensors to feed flash/non-flash attention. |
| include/llama.h | Add llama ftype enum entries for TBQ. |
| ggml/src/ggml.c | Register TBQ type traits, ftype mapping, and quantize chunk dispatch. |
| ggml/src/ggml-turboq.h | New TurboQuant helper API header. |
| ggml/src/ggml-turboq.c | New TurboQuant helpers + TBQ3/TBQ4 quantize/dequantize implementations. |
| ggml/src/ggml-turboq-tables.h | New TurboQuant Lloyd-Max codebooks/boundaries. |
| ggml/src/ggml-quants.h | Declare TBQ quantize/dequantize entry points. |
| ggml/src/ggml-quants.c | Add row-data validation for TBQ blocks. |
| ggml/src/ggml-cpu/quants.h | Add CPU quantize + vec_dot declarations for TBQ. |
| ggml/src/ggml-cpu/quants.c | Add CPU quantize wrappers and TBQ vec_dot (dequantize-then-dot) fallback. |
| ggml/src/ggml-cpu/ops.cpp | Extend DUP handling for quantized->F16/BF16 and adjust quantized dup flow. |
| ggml/src/ggml-cpu/ggml-cpu.c | Register TBQ CPU type traits (from_float, vec_dot, vec_dot_type). |
| ggml/src/ggml-cpu/arch-fallback.h | Add tbq vec_dot fallback renames for some architectures. |
| ggml/src/ggml-common.h | Define block_tbq3_0 / block_tbq4_0 layouts. |
| ggml/src/CMakeLists.txt | Build and install TurboQuant sources/headers into ggml-base. |
| ggml/include/ggml.h | Add new ggml type + ftype enum values. |
| common/arg.cpp | Allow TBQ types in --cache-type-k/--cache-type-v parsing and help text. |
|
I've been working on extending unixsysdev's tq3_0 implementation with V cache support and flash attention. Repo here: https://github.com/animehacker/llama-turboquant
What this adds on top of unixsysdev's work:
Tested on Llama-3.3-70B-Instruct-Q4_K_M, 2x RTX 3090:
To be clear: this implements PolarQuant (Stage 1) only — WHT rotation + 3-bit Lloyd-Max. QJL residual correction is not included. Paper with implementation details: https://oliverchurch.com/turboquant-for-ggml-achieving-4.57x-kv-cache-compression-in-llama.cpp.html |
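To make the Stage 1 pipeline concrete: after the rotation, quantization against a fixed Lloyd-Max codebook is a boundary search per value, and dequantization is a centroid lookup. A minimal sketch with placeholder centroid/boundary values and hypothetical names (the real tables live in `ggml-turboq-tables.h`):

```c
// Illustrative Lloyd-Max scalar quantization against a fixed codebook.
// The centroid/boundary values below are placeholders, not the tables
// shipped in ggml-turboq-tables.h.
#include <stdint.h>

static const float tbq3_centroids[8]  = { -2.15f, -1.34f, -0.76f, -0.25f,
                                           0.25f,  0.76f,  1.34f,  2.15f };
static const float tbq3_boundaries[7] = { -1.75f, -1.05f, -0.50f,  0.00f,
                                           0.50f,  1.05f,  1.75f };

// Map one (rotated, scale-normalized) value to its 3-bit code.
static uint8_t tbq3_quantize_one(float x) {
    uint8_t idx = 0;
    while (idx < 7 && x > tbq3_boundaries[idx]) {
        idx++;
    }
    return idx;
}

// Dequantize: centroid lookup, then undo the per-block scale d.
static float tbq3_dequantize_one(uint8_t idx, float d) {
    return d * tbq3_centroids[idx];
}
```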
|
I've been working on a TurboQuant implementation in llama.cpp's GGML framework (CUDA backend, tested on Llama-3.3-70B with 2x RTX 3090s). A few findings that might be useful for the vLLM implementation:
Paper with implementation details: https://oliverchurch.com/turboquant-for-ggml-achieving-4.57x-kv-cache-compression-in-llama.cpp.html Happy to compare notes. |
|
Addressed the actionable points from the Copilot review in
Re-ran:
|
|
Hi @elusznik — great work on this PR. I've been running the Build fix:
|
| KV type | pp t/s (generic C) | pp t/s (NEON) | delta | compression |
|---|---|---|---|---|
| f16 | 312 | 306 | — | 1.0× |
| q4_0 | 307 | 291 | — | 3.6× |
| tbq4_0 | 258 | 276 | +18 t/s (+7%) | 3.9× |
| tbq3_0 | 253 | 274 | +21 t/s (+8%) | 5.2× |
The gap to q4_0 narrows from ~50 t/s → ~16 t/s after NEON. The residual cost is the 128×128 Hadamard rotation matmuls (2 dense matmuls per 256-element block per TBQ block) — closing that fully would require a structured butterfly/WHT transform at the quantization algorithm level, not a kernel change.
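For reference, a structured Walsh-Hadamard transform replaces the dense 128×128 matmul with log2(128) = 7 butterfly passes over each sub-block. A rough, illustrative sketch (not code from this PR or the NEON branch):

```c
// In-place fast Walsh-Hadamard transform, O(d log d) instead of the O(d^2)
// dense rotation matmul. Illustrative only; d must be a power of two.
static void fwht_f32(float * x, int d) {
    for (int len = 1; len < d; len <<= 1) {
        for (int i = 0; i < d; i += len << 1) {
            for (int j = i; j < i + len; j++) {
                const float a = x[j];
                const float b = x[j + len];
                x[j]       = a + b;   // butterfly: sum
                x[j + len] = a - b;   // butterfly: difference
            }
        }
    }
    // scale by 1/sqrt(d) afterwards if an orthonormal transform is required
}
```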
Apple Silicon note
Running with -ngl 99 -ctk tbq4_0 (Metal KV offload) crashes — Metal backend does not support SET_ROWS for TBQ types. -nkvo 1 is the workaround (KV stays on CPU, model layers on Metal). Not a blocker for this PR scope, just worth noting for anyone testing on Apple Silicon.
Happy to submit a follow-up PR with these changes once this lands — or fold them in here if you prefer. Let me know what works best.
|
Hello @CuriosityQuantified, thanks for your input. Unfortunately I do not have ARM64 experience so I couldn't really do it myself. When it comes to the PR, I think a separate request would be more in line with the contribution guidelines of this project |
Based on PR ggml-org#21089 (CPU TurboQuant by elusznik), this adds CUDA kernel support for the TBQ3_0 and TBQ4_0 KV cache quantization types.
New files:
- turboq.cu: GPU rotation matrix init, CUDA dequantize/quantize kernels
  - 128 threads/block, shared memory for codebook decode
  - O(d²) rotation matvec per block via global memory
- turboq.cuh: Kernel declarations
Modified files:
- set-rows.cu: Custom TBQ quantize dispatch
- convert.cu: TBQ→F32/F16 row dequantize
- cpy.cu: TBQ→F32/F16 copy (enables GPU-side ggml_cast in attention)
- ggml-cuda.cu: TBQ in SET_ROWS + CPY capability checks
- arch-fallback.h: ARM build fix (missing TBQ vec_dot macros)
- CMakeLists.txt: turboq.cu added to build
Key fix: Adding TBQ types to the GGML_OP_CPY capability check enables the existing ggml_cast() dequantize path in llama-graph.cpp to run on GPU, improving generation from 2 → 9.5 tok/s (Llama 3B, GTX 1660 Super).
Benchmark (Llama 3.2 3B Q4_K_M, GTX 1660 Super 6GB):
- Prefill: 308 tok/s (4x baseline 75 tok/s)
- Generation: 9.5 tok/s (22% of baseline 42 tok/s)
- Max context: ~98K tokens (2x baseline ~49K)
The O(d²) rotation in dequantize remains the generation bottleneck. Fused flash attention kernels would eliminate this overhead.
|
@elusznik — understood, separate PR it is. I've opened it against your fork: elusznik#1 It's scoped to just the two changes from my comment above — the |
|
I have a working CUDA flash attention implementation for TBQ4_0 and TBQ3_0 on DGX Spark (GB10, SM121) and would be interested in contributing it as a follow-on once the CPU types land. Quick summary of what I have:
My current implementation uses QK=128 blocks (different from the QK_K=256 here), so I'd need to adapt for 256-element blocks with 2 rotation sub-groups. Happy to coordinate on that. Branch for reference: https://github.com/mihai-chiorean/turbo3-cuda/tree/feat/tbq4-cuda-fa-sm121 |
|
@ggerganov I think this one is worth a look - good PR, keeps core changes minimal, CPU only, specialized code in separate files. There will be pressure to adopt the TQ3 quants now that Gemma 4 is relatively context hungry 😃 |
|
@pwilkin I don't know - it looks like pure slop to me. What makes you think it is worth the look from the presented results? |
@ggerganov I don't want to step out of line, but how is it pure slop? I put in some serious work, and is a 3.94x memory reduction with closer-matching top-p than q4_0 something to be discarded?
|
So, half the speed, but negligible quality difference? Is it even against latest `master`? |
The speed is obviously reduced because the matrix multiplication operations are done on the CPU. As per the contribution rules, initial commits of a given feature are to be implemented CPU-only. It was against master at the time of submitting the PR (last Sunday). |
Try latest `master`. |
so resubmit the PR today and close this one? |
No, I mean do the tests again on latest `master`. |
|
Master now already applies rotations on the KV cache before quantization, improving the quality of all the regular quantization types. And since TurboQuant is barely any better than the regular quants without rotation, latest master might actually show better performance with the regular quants. Worth testing. I don't think you will have to close this PR. If TurboQuant still shows improvements you could simply rebase this PR on latest master and solve merge conflicts (if any). |
|
@ggerganov I think it's worth looking at contributions that adhere to the contribution standards, something that to my knowledge only this PR out of all the TurboQuant PRs has done. While the results right now might be a bit underwhelming, they're (a) close to mainline (so possibly can be improved) and (b) even if the TQ4 quant is not worth it, the TQ3 quant might be worth considering. But I'm far from an expert on quants, that's why I asked :) |
|
My 2 cents on this PR, not that it matters: it's a self-contained change, doesn't negatively affect the rest of the codebase, gets a thumbs up from me. I did start another project, not sure how long it will last: https://github.com/ericcurtin/inferrs It has a different vibe.
The goal is to get llama.cpp-like functionality via:
Get vllm-like functionality via:
And the default is closest to candle: would love PRs like this in that project. But I think this has a half-decent chance of getting merged here (and I understand the hesitancy, there's a lot of code to maintain in this project as a whole). |
This is fair feedback. |
|
@CISC Did a new benchmark run on today's master.
Qwen3.5-4B-Q4_K_M, CPU-only, 4 threads, flash attention enabled.
Perplexity & KL Divergence (wikitext-2 test, 5 chunks, ctx=512)
Decode throughput (p=256, n=64, 3 repetitions)
Not a huge breakthrough by any means, but still some improvement. As a CPU-only implementation according to the rules, the throughput is obviously still a matter to be taken care of by CUDA/ROCm kernels, so I hope the potential is visible |
If a PR blatantly violates the contributing guidelines it should be closed immediately but that does not automatically mean that a PR following them is worth reviewing in terms of opportunity cost. In any case, if you just look at the numbers from the OP the |
|
Look at 744c0c7 this is art |
|
The numbers for tbq3_0 look great, so I hope this will be merged. |
|
fwiw, there are those of us (@TheTom, myself, and many others) who have been, and continue to be, working on GPU-optimized implementations. The CPU one is by far going to be the most underwhelming if you're comparing benchmarks.

Yes, upstream now does the rotation, which is a lot of the work of TurboQuant (actually PolarQuant if you don't include QJL, which kills speed, but whatever). This was a good call-out by @Mushoz! However, as @TheTom mentions above, there are many other improvements that make getting this started worthwhile. Asymmetric KV compression is actually massive and works regardless of TurboQuant or not. In mine, Tom's, and other people's extensive testing it is holding up very well: a massive improvement in quantization, better prefill and decode speed, with very little change to (or even improved) PPL and KLD. Truly a thing of beauty that @TheTom discovered there last week.

I very much respect @elusznik's intent to stick to the guidelines and make this a minimal change in order to get the ball rolling. It's intimidating coming into such a popular repo with so much churn, so respect and appreciation where it's due. With all due respect @ggerganov (and I have an immense amount), shitting on it and just calling it slop without any actual critique is just rude and makes you look like a jerk. I understand there are a lot of AI shit PRs around, but this ain't one of them.

My two cents is that it's in everyone's best interest (the OS AI community) to help ensure that we don't have a ton of forks with disparate but material performance improvements littered about. Hoping this lands (or something similar) and some of us can layer on discrete upstream PRs for some of the further decode, prefill and memory improvements we've identified. |
|
It looks good, hungrily waiting for the code to be merged. |
|
No news so far? Maybe I expect too much, but seeing only a 10% improvement made me sad, although it looks like it's CPU-only. I want to use this on a 6800 XT GPU. Currently using gemma-4-26B-A4B-it-UD-IQ4_XS.gguf and just want to get the best result from that. And indeed, local LLMs are getting better each day, which is great news. |
|
@ekryski thanks for the endorsement and the kind words. The slop comment made me feel kinda bad after putting a couple of days of serious work into this |
|
Instead of looking at new quants, you can take a look at existing quants for kv cache here: #21551 |
While looking at existing quants, I found an outlier KV quant pair that seems to perform better than its neighboring quants: q3_K for K and q2_K for V. |
Research of the TurboQuant paper, QJL reference code, and community implementations reveals critical insights:
- QJL should be dropped entirely (MSE-only beats MSE+QJL in practice)
- Nobody uses TurboQuant for V (all use group quant or fp16)
- Without QJL, TBQ4 gets 16 centroids (matching the q4_0 level count)
- PR ggml-org#21089 got PPL=9.046 (matching our 9.53); the gap vs q4_0 is expected
- The paper reports LongBench/NIAH, not perplexity
Also adds a build time tracking log documenting CUDA template compilation issues (ptxas uses 36GB+ RAM, 2+ hours for fattn.cu with VEC templates).
Summary
This PR adds CPU-only TurboQuant KV-cache support for two new cache types:
- `tbq3_0`
- `tbq4_0`

The scope is intentionally narrow for the first PR:

- `TBQ` only (`TBQP` / Q-prod is left for follow-up work)

That keeps the initial landing aligned with the contributor guidance for new features and new `ggml_type` additions: start with CPU support first, keep the PR reviewable, and add backend support in follow-up PRs.

What changed
- `GGML_TYPE_TBQ3_0` and `GGML_TYPE_TBQ4_0`
- `vec_dot` support so CPU flash attention can consume the new KV types (a dequantize-then-dot fallback; see the sketch below)
- `ggml` type traits and quantization entry points
- `tbq3_0` / `tbq4_0` in CLI KV-cache arguments
- `llama-bench` and `quantize` support for the new types
- `test-quantize-fns` coverage
- Backend-op coverage for `GET_ROWS`, `SET_ROWS`, `CPY`, and `FLASH_ATTN_EXT`
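The CPU `vec_dot` added here is the dequantize-then-dot fallback described in the per-file summary above. A minimal sketch of the idea, with hypothetical names rather than the PR's actual symbols:

```c
// Dequantize-then-dot fallback: expand each TBQ block to F32, then accumulate
// a plain dot product with the F32 operand. Names and the callback shape are
// hypothetical; the real kernels live in ggml/src/ggml-cpu/quants.c.
#include <stddef.h>

#define TBQ_QK 256  // elements per TBQ block, as described in this PR

// one quantized block -> TBQ_QK floats
typedef void (*tbq_block_to_float_t)(const void * blk, float * out);

static float vec_dot_tbq_fallback(size_t n, const void * x, size_t block_bytes,
                                  tbq_block_to_float_t to_float, const float * y) {
    float tmp[TBQ_QK];
    float sum = 0.0f;
    for (size_t ib = 0; ib < n / TBQ_QK; ib++) {
        to_float((const char *) x + ib * block_bytes, tmp);  // dequantize block ib
        for (int j = 0; j < TBQ_QK; j++) {
            sum += tmp[j] * y[ib * TBQ_QK + j];
        }
    }
    return sum;
}
```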
Why this scope

I started from a broader TurboQuant implementation, but for the first upstream PR I cut the surface down to the part that is strongest on the current CPU-only evaluation:
- `tbq4_0` is the best-balanced TurboQuant point here
- `tbq3_0` is the memory-first option
- The `TBQP` / split-outlier path is better handled as follow-up work after the plain `TBQ` CPU base lands

Block layout
- `tbq3_0`: 98 bytes / 256 elements = 3.0625 bits / element
- `tbq4_0`: 130 bytes / 256 elements = 4.0625 bits / element
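For illustration, layouts consistent with these sizes could look like the sketch below (a 2-byte fp16 scale plus bit-packed codes); field names are hypothetical and the authoritative definitions are the `block_tbq3_0` / `block_tbq4_0` structs added to `ggml-common.h`:

```c
// Hypothetical layouts matching the byte counts above; the real definitions
// live in ggml-common.h.
#include <stdint.h>

typedef uint16_t ggml_half;   // fp16 stored as raw bits

typedef struct {
    ggml_half d;              // per-block scale: 2 bytes
    uint8_t   qs[96];         // 256 x 3-bit codes, bit-packed: 96 bytes
} block_tbq3_0;               // 98 bytes / 256 elements = 3.0625 bits/element

typedef struct {
    ggml_half d;              // per-block scale: 2 bytes
    uint8_t   qs[128];        // 256 x 4-bit codes, two per byte: 128 bytes
} block_tbq4_0;               // 130 bytes / 256 elements = 4.0625 bits/element
```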
CPU results

Model: `Qwen3.5-4B-Q4_K_M.gguf`

Settings:

- `flash_attn=on`
- `llama-bench` with `pp32` / `tg8`
- `llama-perplexity` on `wikitext-2-raw/wiki.test.raw`, `ctx=256`, `chunks=4`
- KV cache types compared: `f16`, `q8_0`, `q4_0`, `tbq3_0`, `tbq4_0`

Key takeaways:
- `tbq4_0` is the best-balanced TurboQuant point in this CPU-only sweep.
- `tbq4_0` reduces KV cache below stock `q4_0` while keeping similar KLD and slightly better perplexity in this run.
- `tbq3_0` pushes KV memory lower again, with the expected quality tradeoff.
Plots

- KV cache memory usage
- Throughput
- Compression vs speed
- Ablation: KV size vs KLD
Validation
Built locally:
- `cmake -S . -B build-cpu-pr -DCMAKE_BUILD_TYPE=Release`
- `cmake --build build-cpu-pr --target test-quantize-fns test-backend-ops llama-bench llama-cli llama-perplexity -j4`

Checks run:

- `./build-cpu-pr/bin/test-quantize-fns`
- `./build-cpu-pr/bin/test-backend-ops test -b CPU -o GET_ROWS,SET_ROWS,CPY,FLASH_ATTN_EXT -p 'tbq'`
- `llama-bench` CPU comparison vs `f16`, `q8_0`, `q4_0`
- `llama-perplexity` + KL divergence comparison vs `f16`

Follow-up work
Planned follow-ups after this CPU base:
- `TBQP` / Q-prod variants

Acknowledgements
This work was informed by:
- mudler/llama.cpp (`feat/turbo-quant`)
- Aaryan-Kapoor/llama.cpp (`turboquant-tq3_0`)
- TheTom/turboquant_plus
- tonbistudio/turboquant-pytorch

AI usage disclosure
AI tools were used in an assistive capacity for exploration, mechanical refactoring, test/benchmark scripting, and draft review text. The code and measurements in this PR were manually reviewed locally, the relevant checks were run manually, and I can explain the submitted changes and benchmark setup in detail.