author    | Mattheus Chediak <shammcity00@gmail.com> | 2024-06-06 09:17:54 -0300
committer | GitHub <noreply@github.com>              | 2024-06-06 22:17:54 +1000
commit    | a143c04375828b1f72eb1a326115791b63e79345 (patch)
tree      | b64078115459396f2d26b1972d4840674445032b
parent    | 55b2d0849d3ec9e45e4a4d9e480f5fa7977872a6 (diff)
README minor fixes (#7798) [no ci]
derievatives --> derivatives
-rw-r--r-- | README.md | 2
1 file changed, 1 insertion, 1 deletion
@@ -598,7 +598,7 @@ Building the program with BLAS support may lead to some performance improvements
 
 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
-Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derievatives.
+Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derivatives.
 It does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.
 
 ```bash
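# A minimal sketch of the conversion step the note above points to, assuming a
# LLaMA 3 checkpoint already downloaded from Hugging Face into a local folder.
# The directory name, --outfile path, and --outtype value are illustrative and
# not part of the original diff; adjust them to your own setup.
python convert-hf-to-gguf.py models/Meta-Llama-3-8B-Instruct/ \
    --outfile models/Meta-Llama-3-8B-Instruct-f16.gguf \
    --outtype f16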