fix: Fixes wrong input type for raw_dtype in ggml to gguf scripts #8928
ggerganov merged 1 commit into ggml-org:master
Conversation
Thanks for finding this and fixing it. There have been many refactors lately. But I think the type of the `dtype` field should be converted where it is read. Would this work?

```diff
diff --git a/convert_llama_ggml_to_gguf.py b/convert_llama_ggml_to_gguf.py
index 7b00b439..701df869 100755
--- a/convert_llama_ggml_to_gguf.py
+++ b/convert_llama_ggml_to_gguf.py
@@ -116,7 +116,7 @@ class Tensor:
         assert quant is not None, 'Unknown tensor type'
         (blksize, tysize) = quant
         offset += 12
-        self.dtype = dtype
+        self.dtype = gguf.GGMLQuantizationType(dtype)
         self.dims = struct.unpack(f'<{n_dims}I', data[offset:offset + (4 * n_dims)])
         offset += 4 * n_dims
         self.name = bytes(data[offset:offset + name_len])
```
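The suggested conversion works because `GGMLQuantizationType` is an `IntEnum` in the gguf package: calling it with a raw integer either returns the matching member or raises `ValueError` for unknown values. A minimal sketch, using a hypothetical `QuantType` enum as a stand-in for the real one:

```python
from enum import IntEnum

# Hypothetical mirror of a few gguf.GGMLQuantizationType members,
# used here only to illustrate the behavior of the suggested change.
class QuantType(IntEnum):
    F32 = 0
    F16 = 1
    Q4_0 = 2

raw = 1  # raw dtype integer as read from the GGML file header

dtype = QuantType(raw)       # a valid value maps to an enum member
assert dtype is QuantType.F16
assert dtype.name == "F16"   # later code can use a readable name

try:
    QuantType(999)           # an out-of-range value fails fast
except ValueError:
    print("invalid raw dtype rejected with ValueError")
```

So wrapping the raw int at read time both validates it and gives downstream code a member with a usable `.name`.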
Force-pushed from 451e52f to 66a4225
@compilade

Is the failed CI check required for merging this PR, and do I need to do anything about it? It does not seem to be related to this PR.
You don't need to do anything about it; a fix is pending in #8982, and the source of the problem was identified in #7599 (comment).
@compilade @ggerganov
Thanks for the reminder! |
…-org#8928) Co-authored-by: farbod <farbod.bjary82@gmail.com>
A wrong data type was being passed from `add_tensor` and `add_tensor_info` to this function, causing a second exception to be raised while the original exception was already being raised. So I changed the input types based on the name `raw_dtype` and later converted the value to a `GGMLQuantizationType` object, which can also handle invalid arguments itself.

Related issue: #8929
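The "exception while raising another exception" failure described above can be sketched as follows; `QuantType` and `report_unsupported` are illustrative stand-ins, not the actual llama.cpp/gguf code:

```python
from enum import IntEnum

# Hypothetical stand-in for gguf.GGMLQuantizationType.
class QuantType(IntEnum):
    F32 = 0
    F16 = 1

def report_unsupported(dtype):
    # If dtype is still a plain int, it has no .name attribute, so building
    # the error message raises AttributeError on top of the original problem.
    return f"unsupported tensor type {dtype.name}"

try:
    report_unsupported(1)   # raw int: the error-reporting path itself fails
except AttributeError:
    print("secondary AttributeError while reporting the original error")

# Converting to the enum first keeps the error path working:
assert report_unsupported(QuantType(1)) == "unsupported tensor type F16"
```

Converting `raw_dtype` to the enum up front means an invalid value surfaces as one clear `ValueError` instead of a confusing secondary exception.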