Commit d8f132d: llama.cpp: add MATMUL_INT8 capability to system_info
Merged #4966: ggml: aarch64: implement mmla kernels for q8_0_q8_0, q4_0_q8_0 and q4_1_q8_1 quantized gemm

