author     Mattheus Chediak <shammcity00@gmail.com>  2024-06-06 09:17:54 -0300
committer  GitHub <noreply@github.com>               2024-06-06 22:17:54 +1000
commit     a143c04375828b1f72eb1a326115791b63e79345 (patch)
tree       b64078115459396f2d26b1972d4840674445032b
parent     55b2d0849d3ec9e45e4a4d9e480f5fa7977872a6 (diff)
README minor fixes (#7798) [no ci]
derievatives --> derivatives
-rw-r--r--  README.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 9d2a59d8..09e8cad3 100644
--- a/README.md
+++ b/README.md
@@ -598,7 +598,7 @@ Building the program with BLAS support may lead to some performance improvements
To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
-Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derievatives.
+Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derivatives.
It does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.
```bash
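
For context, the conversion step referenced in the hunk above would typically be run along these lines; this is a minimal sketch, not part of the commit, and the model directory and output filename are placeholders:

```bash
# Sketch only: convert a LLaMA 3 checkpoint downloaded from Hugging Face
# to GGUF using convert-hf-to-gguf.py. Paths and the output name are
# placeholders, not taken from this commit.
python3 convert-hf-to-gguf.py path/to/Meta-Llama-3-8B \
    --outfile llama-3-8b-f16.gguf --outtype f16
```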