author | Olivier Chafik <ochafik@users.noreply.github.com> | 2023-08-23 20:33:05 +0100
---|---|---
committer | GitHub <noreply@github.com> | 2023-08-23 22:33:05 +0300
commit | 95385241a91a616788a3bb76d12c9b7b2379ca2d (patch) |
tree | c58bfb2b9299889474b12a2762dc4c15d19683c0 /examples/convert-llama2c-to-ggml/README.md |
parent | 335acd2ffd7b04501c6d8773ab9fcee6e7bf8639 (diff) |
examples : restore the functionality to import llama2.c models (#2685)
* Fix import of llama2.c models that don't share weights between embedding layers
* llama2c: reinstate ggmlv3 conversion output + update readme w/ gguf conv
* llama2.c: comment out legacy "load from ggml model" logic
* llama2.c: convert special-cased "<0xXX>" single byte tokens from tokenizer.bin
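
The first bullet concerns models whose output classifier does not share weights with the token embedding table. In Karpathy's llama2.c export format this is signalled by the sign of `vocab_size` in the file header (negative means the classifier has its own weights). The following is a minimal C++ sketch of that detection, assuming the usual llama2.c header of seven `int32_t` fields; it is an illustration of the convention, not the code from this commit:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstdint>

// Header layout used by llama2.c model exports (seven int32 fields).
struct Llama2cConfig {
    int32_t dim;         // transformer embedding dimension
    int32_t hidden_dim;  // FFN hidden dimension
    int32_t n_layers;    // number of transformer layers
    int32_t n_heads;     // number of attention heads
    int32_t n_kv_heads;  // number of key/value heads
    int32_t vocab_size;  // vocabulary size; the sign encodes weight sharing
    int32_t seq_len;     // maximum sequence length
};

int main(int argc, char ** argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s stories42M.bin\n", argv[0]); return 1; }
    FILE * f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Llama2cConfig cfg;
    if (fread(&cfg, sizeof(cfg), 1, f) != 1) { fprintf(stderr, "short read\n"); fclose(f); return 1; }
    fclose(f);

    // llama2.c convention: vocab_size > 0 means the token embeddings and the
    // output classifier share one weight matrix; negative means they do not.
    bool shared_weights = cfg.vocab_size > 0;
    cfg.vocab_size = abs(cfg.vocab_size);

    printf("dim=%d layers=%d vocab=%d shared_classifier=%d\n",
           cfg.dim, cfg.n_layers, cfg.vocab_size, shared_weights ? 1 : 0);
    return 0;
}
```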
Diffstat (limited to 'examples/convert-llama2c-to-ggml/README.md')
-rw-r--r-- | examples/convert-llama2c-to-ggml/README.md | 14 |
1 file changed, 9 insertions, 5 deletions
diff --git a/examples/convert-llama2c-to-ggml/README.md b/examples/convert-llama2c-to-ggml/README.md
index 868f57d6..fd561fcb 100644
--- a/examples/convert-llama2c-to-ggml/README.md
+++ b/examples/convert-llama2c-to-ggml/README.md
@@ -12,15 +12,19 @@ usage: ./convert-llama2c-to-ggml [options]
 
 options:
   -h, --help                       show this help message and exit
-  --copy-vocab-from-model FNAME    model path from which to copy vocab (default 'models/ggml-vocab.bin')
+  --copy-vocab-from-model FNAME    model path from which to copy vocab (default 'tokenizer.bin')
   --llama2c-model FNAME            [REQUIRED] model path from which to load Karpathy's llama2.c model
   --llama2c-output-model FNAME     model path to save the converted llama2.c model (default ak_llama_model.bin')
 ```
 
-An example command is as follows:
+An example command using a model from [karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas) is as follows:
 
-`$ ./convert-llama2c-to-ggml --copy-vocab-from-model <ggml-vocab.bin> --llama2c-model <llama2.c model path> --llama2c-output-model <ggml output model path>`
+`$ ./convert-llama2c-to-ggml --copy-vocab-from-model ../llama2.c/tokenizer.bin --llama2c-model stories42M.bin --llama2c-output-model stories42M.ggmlv3.bin`
 
-Now you can use the model with command like:
+For now the generated model is in the legacy GGJTv3 format, so you need to convert it to gguf manually:
 
-`$ ./main -m <ggml output model path> -p "One day, Lily met a Shoggoth" -n 500 -c 256 -eps 1e-5`
+`$ python ./convert-llama-ggmlv3-to-gguf.py --eps 1e-5 --input stories42M.ggmlv3.bin --output stories42M.gguf.bin`
+
+Now you can use the model with a command like:
+
+`$ ./main -m stories42M.gguf.bin -p "One day, Lily met a Shoggoth" -n 500 -c 256`
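
The last bullet of the commit message refers to tokenizer.bin entries that encode single bytes as the literal string `<0xXX>` (sentencepiece's byte-fallback notation), which the converter turns back into raw bytes. Below is a minimal C++ sketch of that decoding step, illustrative rather than the commit's actual implementation; `decode_byte_token` is a hypothetical helper name:

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Hypothetical helper: returns true and writes the decoded byte if `token`
// has exactly the form "<0xXX>" with two hex digits, e.g. "<0x0A>".
static bool decode_byte_token(const std::string & token, unsigned char & out) {
    if (token.size() != 6 || token.compare(0, 3, "<0x") != 0 || token.back() != '>') {
        return false;
    }
    std::string hex = token.substr(3, 2);
    char * end = nullptr;
    unsigned long v = strtoul(hex.c_str(), &end, 16);
    if (*end != '\0') {
        return false; // non-hex characters inside the brackets
    }
    out = (unsigned char) v;
    return true;
}

int main() {
    unsigned char b = 0;
    if (decode_byte_token("<0x0A>", b)) {
        printf("decoded byte: 0x%02X (newline)\n", b);
    }
    return 0;
}
```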