ggml-cpu: extend RVV repack GEMM and GEMV to other VLENs #8

Merged: taimur-10x merged 3 commits into master from 10x/riscv-quant-repack-vlens on Mar 18, 2026

ggml-cpu: extend RVV repack GEMM and GEMV to other VLENs#8
taimur-10x merged 3 commits intomasterfrom
10x/riscv-quant-repack-vlens

Conversation

taimur-10x (Collaborator) commented on Feb 13, 2026

Summary

This PR extends the existing repacking and GEMM/GEMV kernels for quantized types to other RVV vector lengths (VLEN = 128 to 1024) and adds new kernels for Q5_K and MXFP4.

Key Changes

  • Added repacking RVV GEMM and GEMV kernels for:

    • Q5_K
    • MXFP4
  • Extended the following implementations to VLEN = 128 through 1024 (a sketch of the interleaved layout these kernels rely on follows this list):

    • Q4_0
    • Q8_0
    • Q2_K
    • Q4_K
    • IQ4_NL
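
As context for the repacked layouts, here is a minimal sketch of the interleaving idea for Q4_0, assuming ggml's standard 32-element block; the struct and helper names below are illustrative, not the actual ggml definitions:

```c
#include <stdint.h>

#define QK4_0 32                      // elements per Q4_0 block

typedef struct {
    uint16_t d;                       // fp16 scale (bit pattern)
    uint8_t  qs[QK4_0 / 2];           // 32 x 4-bit quants, two per byte
} block_q4_0;

// Eight rows' blocks interleaved so one vector load pulls the same
// block position from all eight rows (hypothetical x8 layout).
typedef struct {
    uint16_t d[8];                    // the eight blocks' scales
    uint8_t  qs[8 * QK4_0 / 2];       // quants interleaved across rows
} block_q4_0x8;

static block_q4_0x8 repack_q4_0x8(const block_q4_0 in[8]) {
    block_q4_0x8 out;
    for (int r = 0; r < 8; r++) {
        out.d[r] = in[r].d;
        for (int i = 0; i < QK4_0 / 2; i++) {
            // stride-8 layout: lane r of a vector register always
            // holds row r's data after a unit-stride load
            out.qs[i * 8 + r] = in[r].qs[i];
        }
    }
    return out;
}
```

With this layout a single unit-stride vector load feeds several output rows at once, which is what makes the multi-row output tiles in the table below possible.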

Tile Sizes

| VLEN | Tiling   | LHS | RHS  | OUT  |
|------|----------|-----|------|------|
| 128  | 4, 8, 1  | 4x1 | 8x1  | 4x8  |
| 256  | 4, 16, 1 | 4x1 | 16x1 | 4x16 |
| 512  | 4, 32, 1 | 4x1 | 32x1 | 4x32 |
| 1024 | 4, 64, 1 | 4x1 | 64x1 | 4x64 |
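
The RHS tile width tracks the vector length: each doubling of VLEN doubles the number of RHS rows interleaved per tile, while the LHS tile stays at 4 rows. A small sketch of the relationship, inferred from the table above rather than taken from the code:

```c
// RHS rows interleaved per tile as a function of VLEN, inferred from
// the table above: 128 -> 8, 256 -> 16, 512 -> 32, 1024 -> 64.
static inline int rhs_rows_per_tile(int vlen_bits) {
    return vlen_bits / 16;
}

// The GEMM output tile is then 4 x rhs_rows_per_tile(vlen_bits).
```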

Testing

Kernels were functionally tested under QEMU at VLENs from 128 to 1024 bits across a range of input sizes.

Benchmarking Results

End-to-end benchmarks were run on a Banana Pi BPI-F3 (VLEN = 256) using llama-bench.
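
Prompt processing (a batch of activation rows) exercises the GEMM kernels, while token generation (one row at a time) exercises the GEMV kernels, which is why the two are benchmarked separately below. A hedged sketch of the dispatch involved, with illustrative names rather than the actual ggml-cpu entry points:

```c
// Illustrative dispatch only, not the actual ggml-cpu code: a
// single-row activation takes the GEMV kernel (1 x (VLEN/16) output
// tile), a batch of rows takes the tiled GEMM kernel (4 x (VLEN/16)).
typedef void (*kernel_fn)(void);      // stand-in for the real signature

static void mul_mat_repacked(int n_act_rows, kernel_fn gemv, kernel_fn gemm) {
    if (n_act_rows == 1) {
        gemv();                       // token generation path
    } else {
        gemm();                       // prompt processing path
    }
}
```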

Q5_K

Prompt Processing

| Model               | Prompt Size | Repack GEMM 4x16x1 (Tok/s) | Vec Dot (Tok/s) |
|---------------------|-------------|----------------------------|-----------------|
| Tinyllama Q5_K 1.1B | 32          | 25.62                      | 6.94            |
| Tinyllama Q5_K 1.1B | 64          | 24.74                      | 6.84            |
| Tinyllama Q5_K 1.1B | 128         | 24.71                      | 6.53            |
| Tinyllama Q5_K 1.1B | 256         | 24.75                      | 6.72            |
| Tinyllama Q5_K 1.1B | 512         | 23.71                      | 6.99            |

Token Generation

| Model               | Tokens Generated | Repack GEMV 1x16x1 (Tok/s) | Vec Dot (Tok/s) |
|---------------------|------------------|----------------------------|-----------------|
| Tinyllama Q5_K 1.1B | 10               | 6.82                       | 5.98            |
| Tinyllama Q5_K 1.1B | 16               | 6.48                       | 5.29            |
| Tinyllama Q5_K 1.1B | 32               | 6.78                       | 5.75            |
| Tinyllama Q5_K 1.1B | 64               | 6.54                       | 5.15            |
| Tinyllama Q5_K 1.1B | 100              | 6.79                       | 5.22            |

MXFP4

Prompt Processing

| Model                | Prompt Size | Repack GEMM 4x16x1 (Tok/s) | Vec Dot (Tok/s) |
|----------------------|-------------|----------------------------|-----------------|
| Tinyllama MXFP4 1.1B | 32          | 21.08                      | 8.67            |
| Tinyllama MXFP4 1.1B | 64          | 20.13                      | 8.91            |
| Tinyllama MXFP4 1.1B | 128         | 20.39                      | 8.77            |
| Tinyllama MXFP4 1.1B | 256         | 19.04                      | 8.60            |
| Tinyllama MXFP4 1.1B | 512         | 19.47                      | 8.33            |

Token Generation

| Model                | Tokens Generated | Repack GEMV 1x16x1 (Tok/s) | Vec Dot (Tok/s) |
|----------------------|------------------|----------------------------|-----------------|
| Tinyllama MXFP4 1.1B | 10               | 8.53                       | 7.58            |
| Tinyllama MXFP4 1.1B | 16               | 8.25                       | 7.66            |
| Tinyllama MXFP4 1.1B | 32               | 8.17                       | 7.57            |
| Tinyllama MXFP4 1.1B | 64               | 8.12                       | 7.06            |
| Tinyllama MXFP4 1.1B | 100              | 7.84                       | 6.53            |

Future Work

Subsequent PRs will add these kernels for Q3_K and Q6_K.

taimur-10x marked this pull request as draft on February 13, 2026
github-actions bot added the ggml label on February 13, 2026
taimur-10x changed the base branch from master to 10x/riscv-quant-repack on February 13, 2026
taimur-10x force-pushed the 10x/riscv-quant-repack branch from ccce840 to 35bcbdc on February 14, 2026
taimur-10x force-pushed the 10x/riscv-quant-repack-vlens branch from 0c39847 to 3d2f7cf on February 24, 2026
taimur-10x force-pushed the 10x/riscv-quant-repack-vlens branch from e9292b0 to 9601769 on March 4, 2026
taimur-10x changed the base branch from 10x/riscv-quant-repack to master on March 4, 2026
taimur-10x changed the base branch from master to 10x/riscv-quant-repack on March 4, 2026
taimur-10x force-pushed the 10x/riscv-quant-repack branch 3 times, most recently from fb95e74 to a037b85 on March 4, 2026
taimur-10x force-pushed the 10x/riscv-quant-repack-vlens branch 3 times, most recently from 5f26b14 to 053cb1c on March 4, 2026
taimur-10x marked this pull request as ready for review on March 4, 2026
taimur-10x self-assigned this on March 10, 2026
taimur-10x changed the base branch from 10x/riscv-quant-repack to master on March 12, 2026
taimur-10x changed the title from "ggml-cpu: add RVV repack GEMM and GEMV for Q3_K, Q5_K, Q6_K" to "ggml-cpu: extend RVV repack GEMM and GEMV to other VLENs" on March 15, 2026
taimur-10x force-pushed the 10x/riscv-quant-repack-vlens branch 4 times, most recently from 315d3d9 to 1cf0b0e on March 15, 2026
taimur-10x and others added 2 commits on March 18, 2026
taimur-10x force-pushed the 10x/riscv-quant-repack-vlens branch from 1cf0b0e to d5a370f on March 18, 2026
taimur-10x force-pushed the 10x/riscv-quant-repack-vlens branch from d5a370f to 04719ae on March 18, 2026
taimur-10x merged commit 6a5ad20 into master on March 18, 2026
33 of 51 checks passed
rehan-10xengineer pushed a commit that referenced this pull request on Apr 14, 2026: ggml: backend-agnostic tensor parallelism