
Add ggml-model-*.bin checksums for 7B, 13B, 30B #1088

Merged
prusnak merged 2 commits into ggml-org:master from sw:checksum on Apr 20, 2023

Conversation

@sw (Contributor) commented Apr 20, 2023

With the recent flurry of new formats, I didn't want to keep the *.pth files around. So here are the SHA-256 checksums of the ggml model files up to 30B.

Unless I've missed something in the recent commits, model quantization should still be deterministic.
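
For anyone who wants to reproduce or verify these values, here is a minimal sketch in Python. The streaming helper and the example invocation are illustrative, not part of this PR; the model path shown assumes the usual models/7B/ggml-model-*.bin layout:

```python
import hashlib
import sys

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so multi-GB model files never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Example (path is illustrative): python sha256.py models/7B/ggml-model-q4_0.bin
    for path in sys.argv[1:]:
        print(f"{sha256_of_file(path)}  {path}")
```

The two-space `<hash>  <path>` output format matches `sha256sum`, so the result can be diffed line-for-line against the checksum list in this PR, or the list can be verified directly with `sha256sum -c`.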

@prusnak (Contributor) commented Apr 20, 2023

I'll check these and also try to provide the 65B values.

@prusnak (Contributor) left a comment

  1. I checked the hashes provided in the first commit
  2. I computed the 65B hashes and added them as a second commit

LGTM

@prusnak prusnak merged commit 2510c18 into ggml-org:master Apr 20, 2023
@sw sw deleted the checksum branch April 20, 2023 22:12
Seunghhon pushed a commit to Seunghhon/llama.cpp that referenced this pull request Apr 26, 2026
* Add ggml-model-*.bin checksums for 7B, 13B, 30B
* Add ggml-model-*.bin checksums for 65B

---------

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
phuongncn pushed a commit to phuongncn/llama.cpp-gx10-dgx-sparks-deepseekv4 that referenced this pull request Apr 28, 2026
* Add ggml-model-*.bin checksums for 7B, 13B, 30B
* Add ggml-model-*.bin checksums for 65B

---------

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
phuongncn pushed a commit to phuongncn/llama.cpp-gx10-dgx-sparks-deepseekv4 that referenced this pull request Apr 28, 2026
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>