Important

This repository is an experimental mirror of the upstream project atomicmilkshake/llama-cpp-turboquant.

Warning

DO NOT USE THIS REPOSITORY FOR PRODUCTION.


llama.cpp — TurboQuant + TriAttention


A fork of llama.cpp with two major additions:

  • TurboQuant — custom low-bit quantization formats (turbo2, turbo3, turbo4) with hardware-optimised CUDA kernels for faster inference with smaller memory footprint
  • TriAttention — GPU-accelerated KV cache pruning (arXiv 2604.04921) that scores token importance using RoPE-inverted key vectors and evicts low-value tokens, enabling long-context inference within a fixed memory budget

Pre-built Windows Binaries

Download the latest Release build (Windows x64, CUDA 13, RTX 2000+) from Hugging Face:

🤗 atomicmilkshake/llama-cpp-turboquant-binaries

Requires CUDA 13.x runtime (cublasLt64_13.dll). Install the CUDA Toolkit or the CUDA runtime redistributable if you don't have it.


TriAttention

TriAttention keeps the KV cache within a fixed token budget by periodically scoring cached tokens and evicting the least important ones.

There are now two runtime modes:

| Mode                  | Status    | What it needs                                                           |
| --------------------- | --------- | ----------------------------------------------------------------------- |
| calibrated            | Canonical | Embedded calibration in the GGUF or an external .triattention artifact  |
| experimental fallback | Heuristic | No calibration file; runtime uses norm+recency scoring                  |

Calibration remains the paper-aligned path. The fallback mode exists so inference can still run without precomputed query statistics, but it is not equivalent to the method described in the paper.
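As a rough illustration of the fallback path, here is a minimal sketch of a norm+recency score in C++; the function name, signature, and normalization are invented for exposition and do not mirror the fork's GPU kernels:

// Minimal sketch (not the fork's implementation) of the fallback
// norm+recency heuristic: blend key-vector magnitude with token age.
#include <cmath>
#include <cstddef>
#include <vector>

// `key` is the RoPE-inverted key vector for a cached position, `age` is
// how many decode steps ago it was written, and `recency_weight` plays
// the role of --triattention-fallback-recency-weight (default 0.25).
// In a real kernel both terms would be normalized to comparable scales.
float fallback_score(const std::vector<float> & key,
                     size_t age, size_t max_age,
                     float recency_weight = 0.25f) {
    float norm2 = 0.0f;
    for (float x : key) norm2 += x * x;
    const float norm    = std::sqrt(norm2);
    const float recency = 1.0f - (float) age / (float) (max_age + 1); // 1 = newest
    return (1.0f - recency_weight) * norm + recency_weight * recency;
}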

Performance (Qwen3-8B Q4_K_M, RTX 3080, -c 512)

| Mode            | Prune overhead  | Generation speed |
| --------------- | --------------- | ---------------- |
| No budget limit | n/a             | 17.5 tok/s       |
| CPU scoring     | ~5,900 ms/event | 17.5 tok/s       |
| GPU scoring     | ~4–9 ms/event   | 75.0 tok/s       |

GPU scoring is ~1,000× faster than CPU. The 4.3× generation speedup comes from keeping the KV cache within VRAM budget (no eviction stalls, consistent flash-attention batch sizes).

Quick start

Build the server and the calibration tool (after configuring CMake as described in Building from source below):

cmake --build build --target llama-server llama-triattention-calibrate -j

Build a calibrated GGUF from a plain-text corpus:

./build/bin/llama-triattention-calibrate -m YourModel.gguf -f corpus.txt -o YourModel.triattention.gguf \
  -c 8192 -b 2048

To emit only an external calibration artifact instead:

./build/bin/llama-triattention-calibrate -m YourModel.gguf -f corpus.txt \
  --external-out model.triattention --no-embed \
  -c 8192 -b 2048

Inspect or validate either form:

./build/bin/llama-triattention-calibrate --inspect YourModel.triattention.gguf
./build/bin/llama-triattention-calibrate --validate YourModel.triattention.gguf
./build/bin/llama-triattention-calibrate --inspect model.triattention
./build/bin/llama-triattention-calibrate --validate model.triattention -m YourModel.gguf

Run calibrated TriAttention:

./build/bin/llama-server -m YourModel.triattention.gguf -c 32768 -ngl 99 --port 8080 \
  --triattention-budget 4096 \
  --triattention-window 256 \
  --triattention-log

Or keep the original model and pass an explicit external artifact:

./build/bin/llama-server -m YourModel.gguf -c 32768 -ngl 99 --port 8080 \
  --triattention-stats model.triattention \
  --triattention-budget 4096 \
  --triattention-window 256 \
  --triattention-log

Run experimental fallback without a stats file:

./build/bin/llama-server -m YourModel.gguf -c 32768 -ngl 99 --port 8080 \
  --triattention-fallback auto \
  --triattention-budget 4096 \
  --triattention-window 256 \
  --triattention-log

CLI flags

| Flag                                        | Default  | Description                                                                                                   |
| ------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------------- |
| --triattention-stats <file>                 | (none)   | Explicit external calibration artifact; checked only when the loaded GGUF does not already embed calibration   |
| --triattention-budget <n>                   | 512      | Maximum KV tokens to retain after each prune                                                                   |
| --triattention-window <n>                   | 64       | Pruning interval in decode tokens; the most recent N positions are also protected                              |
| --triattention-offset-max <n>               | 65536    | Maximum geometric offset used by trig scoring                                                                  |
| --triattention-mode <mode>                  | global   | global, per-kv-head, or per-layer-head                                                                         |
| --triattention-trigger <mode>               | interval | interval or slack                                                                                              |
| --triattention-agg <mode>                   | mean     | mean or max aggregation over geometric offsets                                                                 |
| --triattention-fallback <mode>              | auto     | auto, off, or hybrid-norm-recency                                                                              |
| --triattention-fallback-recency-weight <f>  | 0.25     | Blend factor for the fallback recency term                                                                     |
| --triattention-log                          | off      | Print a line for each prune event                                                                              |
| --triattention-no-protect-prefill           | off      | Allow evicting prompt (prefill) tokens                                                                         |

How it works

  1. A prune is triggered either every window decode tokens or once occupancy reaches budget + window, depending on --triattention-trigger
  2. Prefix tokens can be protected, and the most recent window positions are always protected
  3. In calibrated mode, cached keys are scored against offline query statistics collected from pre-RoPE Q
  4. In fallback mode, cached keys are scored with a norm+recency heuristic using the same RoPE-inverted key path
  5. The top-budget positions are kept and the rest are evicted (sketched below)
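
The trigger and eviction logic can be sketched as follows; the struct and function names are hypothetical and only illustrate the five steps above:

// Hypothetical sketch of a prune event; identifiers are invented and
// do not correspond to this fork's actual source.
#include <algorithm>
#include <cstdint>
#include <vector>

struct cached_token { int32_t pos; bool is_prefill; float score; };

// Step 1: "interval" fires every `window` decode tokens, "slack" fires
// once occupancy reaches budget + window.
bool should_prune(size_t decoded, size_t occupancy,
                  size_t budget, size_t window, bool slack_trigger) {
    return slack_trigger ? occupancy >= budget + window
                         : decoded % window == 0;
}

// Steps 2-5: protect recent (and optionally prefill) tokens, then keep
// the highest-scoring remainder until `budget` positions are retained.
std::vector<int32_t> prune(const std::vector<cached_token> & cache,
                           size_t budget, size_t window, bool protect_prefill) {
    std::vector<int32_t> keep;
    std::vector<const cached_token *> cand;
    const int32_t newest = cache.empty() ? 0 : cache.back().pos;
    for (const auto & t : cache) {
        const bool recent = t.pos > newest - (int32_t) window; // always protected
        if (recent || (protect_prefill && t.is_prefill)) keep.push_back(t.pos);
        else cand.push_back(&t); // eligible for eviction
    }
    std::sort(cand.begin(), cand.end(),
              [](const cached_token * a, const cached_token * b) {
                  return a->score > b->score;
              });
    for (const auto * t : cand) {
        if (keep.size() >= budget) break;
        keep.push_back(t->pos);
    }
    return keep; // positions not returned are evicted
}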

Runtime calibration resolution order (a minimal resolver sketch follows the list):

  1. Embedded calibration inside the loaded GGUF
  2. Explicit --triattention-stats <file>
  3. Sidecar <model>.triattention next to the loaded model
  4. Experimental fallback, if enabled
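
A resolver expressing that order might look like this; all names are hypothetical, and the behavior when nothing resolves is an assumption:

// Sketch of the documented four-step resolution order; names are
// invented, and the `none` outcome (TriAttention inactive) is assumed.
#include <optional>
#include <string>

enum class calib_source { embedded, explicit_file, sidecar, fallback, none };

calib_source resolve_calibration(bool gguf_has_embedded,
                                 const std::optional<std::string> & stats_flag,
                                 bool sidecar_exists, bool fallback_enabled) {
    if (gguf_has_embedded) return calib_source::embedded;       // 1. in-GGUF
    if (stats_flag)        return calib_source::explicit_file;  // 2. --triattention-stats
    if (sidecar_exists)    return calib_source::sidecar;        // 3. <model>.triattention
    if (fallback_enabled)  return calib_source::fallback;       // 4. heuristic fallback
    return calib_source::none;                                  // assumed: pruning disabled
}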

TurboQuant

TurboQuant provides three custom quantization formats that outperform standard GGUF quants at equivalent bit widths:

| Format   | Bits/weight | Notes                                                        |
| -------- | ----------- | ------------------------------------------------------------ |
| turbo4_0 | ~4.0        | Drop-in replacement for q4_0, with rotation-based clustering |
| turbo3_0 | ~3.0        | Sub-byte with Hadamard pre-rotation                          |
| turbo2_0 | ~2.0        | Aggressive compression with WHT-space centroids              |

All formats have CUDA kernels optimised for Turing+ (SM75) and Ampere (SM80/86) architectures.
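The Hadamard/WHT pre-rotation these formats rely on is a standard construction. Below is a minimal in-place fast Walsh–Hadamard transform for reference; TurboQuant fuses an equivalent rotation into its CUDA kernels, which this scalar sketch does not attempt to reproduce:

// Textbook in-place fast Walsh-Hadamard transform over a power-of-two
// block, orthonormalized; a scalar reference, not TurboQuant's kernel.
#include <cmath>
#include <cstddef>

void fwht(float * x, size_t n) { // n must be a power of two
    for (size_t h = 1; h < n; h <<= 1) {
        for (size_t i = 0; i < n; i += h << 1) {
            for (size_t j = i; j < i + h; ++j) {
                const float a = x[j];
                const float b = x[j + h];
                x[j]     = a + b; // butterfly: sum
                x[j + h] = a - b; // butterfly: difference
            }
        }
    }
    const float scale = 1.0f / std::sqrt((float) n); // keep norms unchanged
    for (size_t i = 0; i < n; ++i) x[i] *= scale;
}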


Building from source

Requirements

  • Windows 10/11 or Linux
  • CUDA Toolkit 12.x or 13.x
  • Visual Studio 2022+ with C++ workload (Windows) or GCC 11+ (Linux)
  • CMake 3.21+

Windows (CUDA)

cmake -B build -G "Visual Studio 17 2022" -A x64 `
  -DGGML_CUDA=ON `
  -DCMAKE_CUDA_ARCHITECTURES="75;80;86;89;120;121"

cmake --build build --config Release --target llama-server -j

Linux (CUDA)

cmake -B build \
  -DGGML_CUDA=ON \
  -DCMAKE_CUDA_ARCHITECTURES="75;80;86;89;120;121" \
  -DCMAKE_BUILD_TYPE=Release

cmake --build build --target llama-server -j$(nproc)

Branches

| Branch                      | Description                                        |
| --------------------------- | -------------------------------------------------- |
| triattention                | Default in this mirror: TurboQuant + TriAttention  |
| feature/triattention        | Upstream branch with TriAttention development      |
| feature/turboquant-kv-cache | TurboQuant base (pre-TriAttention)                 |
| master                      | Upstream llama.cpp base                            |

Credits


For the original llama.cpp documentation, see docs/ or github.com/ggml-org/llama.cpp.

LLM inference in C/C++

Recent API changes

Hot topics


Quick start

Getting started with llama.cpp is straightforward; several installation options are described in docs/install.md.

Once installed, you'll need a model to work with. Head to the Obtaining and quantizing models section to learn more.

Example command:

# Use a local model file
llama-cli -m my_model.gguf

# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF

Description

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.

  • Plain C/C++ implementation without any dependencies
  • Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
  • AVX, AVX2, AVX512 and AMX support for x86 architectures
  • RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
  • 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
  • Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
  • Vulkan and SYCL backend support
  • CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity

The llama.cpp project is the main playground for developing new features for the ggml library.

Models

Typically finetunes of the base models below are supported as well.

Instructions for adding support for new models: HOWTO-add-model.md

Text-only

Multimodal

Bindings
UIs

(to have a project listed here, it should clearly state that it depends on llama.cpp)

Tools
  • akx/ggify – download PyTorch models from Hugging Face Hub and convert them to GGML
  • akx/ollama-dl – download models from the Ollama library to be used directly with llama.cpp
  • crashr/gppm – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
  • gpustack/gguf-parser - review/check the GGUF file and estimate the memory usage
  • Styled Lines (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
  • unslothai/unsloth – 🦥 exports/saves fine-tuned and trained models to GGUF (Apache-2.0)
Infrastructure
  • Paddler - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
  • GPUStack - Manage GPU clusters for running LLMs
  • llama_cpp_canister - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
  • llama-swap - transparent proxy that adds automatic model switching with llama-server
  • Kalavai - Crowdsource end to end LLM deployment at any scale
  • llmaz - ☸️ Easy, advanced inference platform for large language models on Kubernetes.
  • LLMKube - Kubernetes operator for llama.cpp with multi-GPU and Apple Silicon Metal support
Games
  • Lucy's Labyrinth - A simple maze game where agents controlled by an AI model will try to trick you.

Supported backends

| Backend                | Target devices             |
| ---------------------- | -------------------------- |
| Metal                  | Apple Silicon              |
| BLAS                   | All                        |
| BLIS                   | All                        |
| SYCL                   | Intel and Nvidia GPU       |
| OpenVINO [In Progress] | Intel CPUs, GPUs, and NPUs |
| MUSA                   | Moore Threads GPU          |
| CUDA                   | Nvidia GPU                 |
| HIP                    | AMD GPU                    |
| ZenDNN                 | AMD CPU                    |
| Vulkan                 | GPU                        |
| CANN                   | Ascend NPU                 |
| OpenCL                 | Adreno GPU                 |
| IBM zDNN               | IBM Z & LinuxONE           |
| WebGPU [In Progress]   | All                        |
| RPC                    | All                        |
| Hexagon [In Progress]  | Snapdragon                 |
| VirtGPU                | VirtGPU APIR               |

Obtaining and quantizing models

The Hugging Face platform hosts a number of LLMs compatible with llama.cpp.

You can either manually download a GGUF file or directly use any llama.cpp-compatible model from Hugging Face or other model-hosting sites with the CLI argument -hf <user>/<model>[:quant]. For example:

llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

By default, the CLI downloads from Hugging Face; you can switch to another source with the MODEL_ENDPOINT environment variable, which must point to a Hugging Face-compatible API endpoint.

After downloading a model, use the CLI tools to run it locally - see below.

llama.cpp requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py Python scripts in this repo.

The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp.

To learn more about model quantization, read this documentation

llama-cli

A CLI tool for accessing and experimenting with most of llama.cpp's functionality.

  • Run in conversation mode

    Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding -cnv and specifying a suitable chat template with --chat-template NAME

    llama-cli -m model.gguf
    
    # > hi, who are you?
    # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
    #
    # > what is 1+1?
    # Easy peasy! The answer to 1+1 is... 2!
  • Run in conversation mode with custom chat template
    # use the "chatml" template (use -h to see the list of supported templates)
    llama-cli -m model.gguf -cnv --chat-template chatml
    
    # use a custom template
    llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
  • Constrain the output with a custom grammar
    llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'
    
    # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}

    The grammars/ folder contains a handful of sample grammars. To write your own, check out the GBNF Guide.

    For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/

llama-server

A lightweight, OpenAI API-compatible HTTP server for serving LLMs.

  • Start a local HTTP server with default configuration on port 8080
    llama-server -m model.gguf --port 8080
    
    # Basic web UI can be accessed via browser: http://localhost:8080
    # Chat completion endpoint: http://localhost:8080/v1/chat/completions
  • Support multiple users and parallel decoding
    # up to 4 concurrent requests, each with 4096 max context
    llama-server -m model.gguf -c 16384 -np 4
  • Enable speculative decoding
    # the draft.gguf model should be a small variant of the target model.gguf
    llama-server -m model.gguf -md draft.gguf
  • Serve an embedding model
    # use the /embedding endpoint
    llama-server -m model.gguf --embedding --pooling cls -ub 8192
  • Serve a reranking model
    # use the /reranking endpoint
    llama-server -m model.gguf --reranking
  • Constrain all outputs with a grammar
    # custom grammar
    llama-server -m model.gguf --grammar-file grammar.gbnf
    
    # JSON
    llama-server -m model.gguf --grammar-file grammars/json.gbnf

llama-perplexity

A tool for measuring the perplexity¹ (and other quality metrics) of a model over a given text.
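For reference, perplexity over N tokens is the exponentiated mean negative log-likelihood:

$$\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid x_{<i})\right)$$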

  • Measure the perplexity over a text file
    llama-perplexity -m model.gguf -f file.txt
    
    # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
    # Final estimate: PPL = 5.4007 +/- 0.67339
  • Measure KL divergence
    # TODO

llama-bench

Benchmark the performance of the inference for various parameters.

  • Run default benchmark
    llama-bench -m model.gguf
    
    # Output:
    # | model               |       size |     params | backend    | threads |          test |                  t/s |
    # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
    #
    # build: 3e0ba0e60 (4229)

llama-simple

A minimal example for implementing apps with llama.cpp. Useful for developers.

  • Basic text completion
    llama-simple -m model.gguf
    
    # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of

Contributing

  • Contributors can open PRs
  • Collaborators will be invited based on contributions
  • Maintainers can push to branches in the llama.cpp repo and merge PRs into the master branch
  • Any help with managing issues, PRs and projects is very appreciated!
  • See good first issues for tasks suitable for first contributions
  • Read the CONTRIBUTING.md for more information
  • Make sure to read this: Inference at the edge
  • A bit of backstory for those who are interested: Changelog podcast

Other documentation

Development documentation

Seminal papers and background on the models

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:

XCFramework

The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:

// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)

The above example is using an intermediate build b5046 of the library. This can be modified to use a different version by changing the URL and checksum.

Completions

Command-line completion is available for some environments.

Bash Completion

$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash

Optionally this can be added to your .bashrc or .bash_profile to load it automatically. For example:

$ echo "source ~/.llama-completion.bash" >> ~/.bashrc

Dependencies

  • yhirose/cpp-httplib - Single-header HTTP server, used by llama-server - MIT license
  • stb-image - Single-header image format decoder, used by multimodal subsystem - Public domain
  • nlohmann/json - Single-header JSON library, used by various tools/examples - MIT License
  • miniaudio.h - Single-header audio format decoder, used by multimodal subsystem - Public domain
  • subprocess.h - Single-header process launching solution for C and C++ - Public domain

Footnotes

  1. https://huggingface.co/docs/transformers/perplexity
