author | Galunid <karolek1231456@gmail.com> | 2024-05-30 13:40:00 +0200 |
---|---|---|
committer | GitHub <noreply@github.com> | 2024-05-30 21:40:00 +1000 |
commit | 9c4c9cc83f7297a10bb3b2af54a22ac154fd5b20 (patch) | |
tree | 81d201275f61a8e956d36d4cb5514c27332bbe05 /docs/HOWTO-add-model.md | |
parent | 59b0d077662fab430446b3119fa142f3291c45b2 (diff) | |
Move convert.py to examples/convert-legacy-llama.py (#7430)
* Move convert.py to examples/convert-no-torch.py
* Fix CI, scripts, readme files
* convert-no-torch -> convert-legacy-llama
* Move vocab thing to vocab.py
* Fix convert-no-torch -> convert-legacy-llama
* Fix lost convert.py in ci/run.sh
* Fix imports
* Fix gguf not imported correctly
* Fix flake8 complaints
* Fix check-requirements.sh
* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE
* Review fixes
Diffstat (limited to 'docs/HOWTO-add-model.md')
-rw-r--r-- | docs/HOWTO-add-model.md | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/docs/HOWTO-add-model.md b/docs/HOWTO-add-model.md
index 48769cdf..13812424 100644
--- a/docs/HOWTO-add-model.md
+++ b/docs/HOWTO-add-model.md
@@ -17,7 +17,7 @@ Also, it is important to check that the examples and main ggml backends (CUDA, M
 ### 1. Convert the model to GGUF
 
 This step is done in python with a `convert` script using the [gguf](https://pypi.org/project/gguf/) library.
-Depending on the model architecture, you can use either [convert.py](../convert.py) or [convert-hf-to-gguf.py](../convert-hf-to-gguf.py).
+Depending on the model architecture, you can use either [convert-hf-to-gguf.py](../convert-hf-to-gguf.py) or [examples/convert-legacy-llama.py](../examples/convert-legacy-llama.py) (for `llama/llama2` models in `.pth` format).
 
 The convert script reads the model configuration, tokenizer, tensor names+data and converts them to GGUF metadata and tensors.
 
```
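For context on the doc text being changed: the convert scripts use the [gguf](https://pypi.org/project/gguf/) Python package to turn model metadata and tensors into a `.gguf` file. Below is a minimal, hypothetical sketch of that flow; the hyperparameter values, tensor name, and output path are invented for illustration and are not taken from this commit or from the real convert scripts.

```python
# Hypothetical sketch: write a tiny GGUF file with the gguf Python package.
# All values below are illustrative only.
import numpy as np
import gguf

# Output path and architecture string are assumptions for this example.
writer = gguf.GGUFWriter("model-f16.gguf", "llama")

# Metadata that a convert script would read from the source model's config
# (the numbers here are made up).
writer.add_name("example-model")
writer.add_context_length(4096)
writer.add_embedding_length(4096)
writer.add_block_count(32)
writer.add_feed_forward_length(11008)
writer.add_head_count(32)

# Tensor data read from the source checkpoint, renamed to GGUF conventions.
writer.add_tensor("token_embd.weight", np.zeros((32000, 4096), dtype=np.float16))

# Serialize the header, key/value metadata, and tensor data, then close.
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```

The real `convert-hf-to-gguf.py` and `examples/convert-legacy-llama.py` follow the same pattern at full scale: read the model configuration and tokenizer, map every source tensor name to the GGUF naming scheme, and emit the complete set of metadata and tensors.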