author    | Lyle Dean <dean@lyle.dev> | 2024-05-05 06:21:46 +0100
committer | GitHub <noreply@github.com> | 2024-05-05 08:21:46 +0300
commit    | ca3632602091e959ed2ad4c09c67a7c790b10d31 (patch)
tree      | 90e187c10070dda11034308179421b8efe76b21e
parent    | 889bdd76866ea31a7625ec2dcea63ff469f3e981 (diff)
readme : add note that LLaMA 3 is not supported with convert.py (#7065)
-rw-r--r-- | README.md | 2
1 file changed, 2 insertions(+), 0 deletions(-)
````diff
@@ -712,6 +712,8 @@ Building the program with BLAS support may lead to some performance improvements
 
 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
+Note: `convert.py` does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.
+
 ```bash
 # obtain the official LLaMA model weights and place them in ./models
 ls ./models
````
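The workflow the added note describes could look like the sketch below. The script name `convert-hf-to-gguf.py` comes from the commit itself; the model repo id and the `--outfile`/`--outtype` flags are assumptions and should be checked against `python convert-hf-to-gguf.py --help` in your checkout.

```shell
# Assumed workflow: fetch a LLaMA 3 checkpoint from Hugging Face, then
# convert the HF-format weights to GGUF. Flags below are assumptions --
# verify them with `python convert-hf-to-gguf.py --help`.
huggingface-cli download meta-llama/Meta-Llama-3-8B --local-dir ./models/llama-3-8b

python convert-hf-to-gguf.py ./models/llama-3-8b \
    --outfile ./models/llama-3-8b.gguf \
    --outtype f16
```

Note that `convert.py` (the older path, for the original PyTorch checkpoint layout) will not work here, which is exactly what this commit documents.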