
scripts: add bpw per layer and model #14703

Merged

CISC merged 1 commit into ggml-org:master from EAddario:gguf_dump on Jul 15, 2025

scripts: add bpw per layer and model#14703
CISC merged 1 commit intoggml-org:masterfrom
EAddario:gguf_dump

Conversation

@EAddario
Contributor

Since llama-quantize allows users to select a wide range of quant types, it may not always be obvious which weight encoding scheme is the most appropriate to use when following the GGUF naming conventions.

This PR modifies gguf_dump.py to display the bits per weight (bpw) for each layer, and for the overall model, when using the --markdown option.
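The underlying calculation is straightforward: bits per weight is the total number of stored bits divided by the number of elements, computed per tensor and aggregated for the whole model. A minimal sketch (not the PR's actual code; the Q4_K and Q8_0 block sizes shown in the comments are the standard ggml values):

```python
# Bits per weight (bpw) = stored bits / element count.
def bpw(n_bytes: int, n_elements: int) -> float:
    """Return bits per weight for a tensor stored in n_bytes."""
    return n_bytes * 8 / n_elements

def model_bpw(tensors: list[tuple[int, int]]) -> float:
    """Aggregate bpw over (n_bytes, n_elements) pairs for a whole model."""
    total_bytes = sum(b for b, _ in tensors)
    total_elems = sum(e for _, e in tensors)
    return total_bytes * 8 / total_elems

# Standard ggml block layouts:
#   Q4_K packs 256 weights into 144 bytes -> 4.5 bpw
#   Q8_0 packs  32 weights into  34 bytes -> 8.5 bpw
print(bpw(144, 256))  # 4.5
print(bpw(34, 32))    # 8.5
```

Because different quant types are often mixed within one model (e.g. output and embedding tensors kept at higher precision), the per-layer figures can differ noticeably from the model-wide average, which is what makes the per-layer breakdown useful for picking a GGUF file name.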

@github-actions github-actions Bot added the python python script changes label Jul 15, 2025
@CISC CISC merged commit c81f419 into ggml-org:master Jul 15, 2025
4 checks passed
@EAddario EAddario deleted the gguf_dump branch July 16, 2025 06:39
@EAddario
Contributor Author

Thank you @CISC

