author | Qin Yue Chen <71813199+chenqiny@users.noreply.github.com> | 2023-10-20 06:19:40 -0500
committer | GitHub <noreply@github.com> | 2023-10-20 14:19:40 +0300
commit | 8cf19d60dc93809db8e51fedc811595eed9134c5 (patch)
tree | 879c1861fb50748c02ec031a1dcc3f6e732ca366 /examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp
parent | a0edf73bda31c7c4e649e6f07c6fd30a729929cd (diff)
gguf : support big endian platform (#3552)
* check whether the platform is s390x; if so, do not import immintrin.h
* support s390x big endian
* support --bigendian option for s390x
1. verified with baichuan7b-chat with float16 on s390x
2. verified with baichuan7b-chat
3. verified with chinese-alpaca-2-13b-f16
* update format based on editor-config checker result
* Update convert-baichuan-hf-to-gguf.py
* 1. check in ggml.c whether the file's endianness matches the host
2. update GGUF version
3. change get_pack_prefix to a property
4. update information log
* always use "GGUF" as the beginning of a GGUF file
* Compare "GGUF" with file header char by char
1. Set GGUF_MAGIC to the "GGUF" string instead of an int value
2. Compare "GGUF" char by char so the check is independent of host byte order
3. Move the byte-swap code from convert.py to gguf.py write_tensor_data
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
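The byte-order handling described in the message above has two halves: the writer byte-swaps multi-byte tensor elements when producing a file for the other endianness (that logic now lives in gguf.py write_tensor_data), and the reader rejects files whose byte order does not match the host. Below is a minimal C++ sketch of the writer-side idea; swap_bytes_inplace and the sample buffer are illustrative, not from this commit.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Reverse each elem_size-byte element in place, converting the buffer
    // between little- and big-endian layouts. The commit performs the
    // equivalent swap in Python, inside gguf.py write_tensor_data.
    static void swap_bytes_inplace(uint8_t * data, size_t n_bytes, size_t elem_size) {
        for (size_t i = 0; i + elem_size <= n_bytes; i += elem_size) {
            std::reverse(data + i, data + i + elem_size);
        }
    }

    int main() {
        // Two float16 values (1.0, 2.0) as raw little-endian bytes.
        uint8_t f16_data[4] = { 0x00, 0x3c, 0x00, 0x40 };
        swap_bytes_inplace(f16_data, sizeof(f16_data), 2); // now big-endian
        std::printf("%02x %02x %02x %02x\n",
                    f16_data[0], f16_data[1], f16_data[2], f16_data[3]);
        return 0;
    }

Reversing each fixed-width element is all an endianness conversion amounts to for plain integer and float tensor types; quantized block formats mix fields of different widths and are not covered by this sketch.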
Diffstat (limited to 'examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp')
-rw-r--r-- | examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp b/examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp
index c291f0ad..cae3bf3c 100644
--- a/examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp
+++ b/examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp
@@ -536,7 +536,7 @@ static bool is_ggml_file(const char * filename) {
     if (file.size < 4) {
         return false;
     }
-    uint32_t magic = file.read_u32();
+    std::string magic = file.read_string(4);
     return magic == GGUF_MAGIC;
 }
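The one-line change above is the reader-side endianness fix for the magic check. Reading the first four bytes as a host-order uint32_t ties the comparison to the platform's byte order: the on-disk bytes "GGUF" decode to 0x46554747 on a little-endian host but 0x47475546 on a big-endian one, so a single integer GGUF_MAGIC can only match one of them. Reading the magic as four characters and comparing it with the string "GGUF" works on both. A small standalone sketch of the difference (the buffer and main are illustrative, not part of the commit):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        // The first four bytes of any GGUF file.
        const unsigned char file_bytes[4] = { 'G', 'G', 'U', 'F' };

        // Endianness-dependent: the same file bytes produce a different
        // integer on little- and big-endian hosts.
        uint32_t as_u32;
        std::memcpy(&as_u32, file_bytes, sizeof(as_u32));
        std::printf("as u32: 0x%08x\n", (unsigned) as_u32);

        // Endianness-independent: a byte-wise comparison sees the same
        // sequence on every platform.
        const bool is_gguf = std::memcmp(file_bytes, "GGUF", 4) == 0;
        std::printf("magic matches: %s\n", is_gguf ? "yes" : "no");
        return 0;
    }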