From 9c4c9cc83f7297a10bb3b2af54a22ac154fd5b20 Mon Sep 17 00:00:00 2001
From: Galunid
Date: Thu, 30 May 2024 13:40:00 +0200
Subject: Move convert.py to examples/convert-legacy-llama.py (#7430)

* Move convert.py to examples/convert-no-torch.py

* Fix CI, scripts, readme files

* convert-no-torch -> convert-legacy-llama

* Move vocab thing to vocab.py

* Fix convert-no-torch -> convert-legacy-llama

* Fix lost convert.py in ci/run.sh

* Fix imports

* Fix gguf not imported correctly

* Fix flake8 complaints

* Fix check-requirements.sh

* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE

* Review fixes
---
 docs/HOWTO-add-model.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'docs/HOWTO-add-model.md')

diff --git a/docs/HOWTO-add-model.md b/docs/HOWTO-add-model.md
index 48769cdf..13812424 100644
--- a/docs/HOWTO-add-model.md
+++ b/docs/HOWTO-add-model.md
@@ -17,7 +17,7 @@ Also, it is important to check that the examples and main ggml backends (CUDA, M
 
 ### 1. Convert the model to GGUF
 
 This step is done in python with a `convert` script using the [gguf](https://pypi.org/project/gguf/) library.
-Depending on the model architecture, you can use either [convert.py](../convert.py) or [convert-hf-to-gguf.py](../convert-hf-to-gguf.py).
+Depending on the model architecture, you can use either [convert-hf-to-gguf.py](../convert-hf-to-gguf.py) or [examples/convert-legacy-llama.py](../examples/convert-legacy-llama.py) (for `llama/llama2` models in `.pth` format).
 The convert script reads the model configuration, tokenizer, tensor names+data and converts them to GGUF metadata and tensors.
--
cgit v1.2.3
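
The doc line touched by this patch describes the conversion flow both scripts share: read the model configuration, tokenizer, and tensors, then emit GGUF metadata and tensor data via the [gguf](https://pypi.org/project/gguf/) library. Below is a minimal, illustrative sketch of that write path using `gguf.GGUFWriter`; the output filename, hyperparameter values, and tensor shape are placeholder assumptions, not what the real convert scripts emit.

```python
# Sketch of the GGUF write path: metadata (key/value pairs) plus named
# tensors. Values and shapes below are dummies; a real convert script maps
# the source checkpoint's config and weights into these calls.
import numpy as np
import gguf

writer = gguf.GGUFWriter("sketch.gguf", arch="llama")  # placeholder output path

# Model configuration becomes GGUF metadata key/value pairs.
writer.add_context_length(2048)
writer.add_embedding_length(4096)
writer.add_block_count(32)

# Tensor names + data are registered before anything is written out.
writer.add_tensor("token_embd.weight", np.zeros((32000, 4096), dtype=np.float32))

# Serialize the header, then the KV metadata, then the tensor data.
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```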