Diffstat (limited to 'examples/convert-llama2c-to-ggml/README.md')
 examples/convert-llama2c-to-ggml/README.md | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/examples/convert-llama2c-to-ggml/README.md b/examples/convert-llama2c-to-ggml/README.md
index fd561fcb..0f37d295 100644
--- a/examples/convert-llama2c-to-ggml/README.md
+++ b/examples/convert-llama2c-to-ggml/README.md
@@ -12,18 +12,14 @@ usage: ./convert-llama2c-to-ggml [options]
options:
-h, --help show this help message and exit
- --copy-vocab-from-model FNAME model path from which to copy vocab (default 'tokenizer.bin')
+ --copy-vocab-from-model FNAME path of gguf llama model or llama2.c vocabulary from which to copy vocab (default 'models/7B/ggml-model-f16.gguf')
--llama2c-model FNAME [REQUIRED] model path from which to load Karpathy's llama2.c model
  --llama2c-output-model FNAME  model path to save the converted llama2.c model (default 'ak_llama_model.bin')
```
An example command using a model from [karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas) is as follows:
-`$ ./convert-llama2c-to-ggml --copy-vocab-from-model ../llama2.c/tokenizer.bin --llama2c-model stories42M.bin --llama2c-output-model stories42M.ggmlv3.bin`
-
-For now the generated model is in the legacy GGJTv3 format, so you need to convert it to gguf manually:
-
-`$ python ./convert-llama-ggmlv3-to-gguf.py --eps 1e-5 --input stories42M.ggmlv3.bin --output stories42M.gguf.bin`
+`$ ./convert-llama2c-to-ggml --copy-vocab-from-model llama-2-7b-chat.gguf.q2_K.bin --llama2c-model stories42M.bin --llama2c-output-model stories42M.gguf.bin`
Now you can use the model with a command like:
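The diff chunk ends here, before the example command itself. A hedged sketch of what such an invocation might look like, assuming llama.cpp's `main` binary and its usual `-m`/`-p`/`-n` flags (the prompt and token count are illustrative, not taken from this diff):

```shell
# Run inference on the converted model with llama.cpp's main example binary.
# -m: path to the gguf model produced by the conversion step above
# -p: prompt text (illustrative)
# -n: number of tokens to generate (illustrative)
./main -m stories42M.gguf.bin -p "One day, Lily met a" -n 128
```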