author    : Georgi Gerganov <ggerganov@gmail.com>   2023-03-21 17:29:41 +0200
committer : GitHub <noreply@github.com>             2023-03-21 17:29:41 +0200
commit    : eb34620aeceaf9d9df7fcb19acc17ad41b9f60f8 (patch)
tree      : efa10a024845c09ed2630bbe0675618cc7ba0a50 /quantize.cpp
parent    : 2e664f1ff413995506c9a54f3a8d5b8c64e37a91 (diff)
Add tokenizer test + revert to C++11 (#355)
* Add test-tokenizer-0 to do a few tokenizations - feel free to expand
* Added option to convert-pth-to-ggml.py script to dump just the vocabulary
* Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
* Added utility to load vocabulary file from previous point (temporary implementation)
* Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
* Rename gpt_vocab -> llama_vocab
* All CMake binaries go into ./bin/ now
Diffstat (limited to 'quantize.cpp')
 quantize.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/quantize.cpp b/quantize.cpp
index 07db33a3..b90f34f4 100644
--- a/quantize.cpp
+++ b/quantize.cpp
@@ -44,7 +44,7 @@ bool llama_model_quantize(const std::string & fname_inp, const std::string & fna
         return false;
     }

-    gpt_vocab vocab;
+    llama_vocab vocab;

    printf("%s: loading model from '%s'\n", __func__, fname_inp.c_str());