author | Georgi Gerganov <ggerganov@gmail.com> | 2024-05-06 09:31:30 +0300
---|---|---
committer | Georgi Gerganov <ggerganov@gmail.com> | 2024-05-06 09:31:30 +0300
commit | bcdee0daa7c5e8e086b719e5eb4073b00df70e01 (patch) |
tree | 2131db9f12d1f643c69d137a992d3f2a6e1918aa |
parent | 628b299106d1e9476fdecb3cbe546bf5c60f1b89 (diff) |
minor : fix trailing whitespace
-rw-r--r-- | README.md | 2 |
1 file changed, 1 insertion, 1 deletion
@@ -712,7 +712,7 @@ Building the program with BLAS support may lead to some performance improvements
 
 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
-Note: `convert.py` does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face. 
+Note: `convert.py` does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.
 
 ```bash
 # obtain the official LLaMA model weights and place them in ./models
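
For context, the README note touched by this hunk points readers to llama.cpp's `convert-hf-to-gguf.py` script for LLaMA 3 checkpoints. A minimal sketch of a typical invocation is shown below; the model directory and output filename are placeholders, and available flags may differ between llama.cpp versions, so consult the script's `--help` output.

```bash
# Illustrative only: convert a Hugging Face LLaMA 3 checkout to GGUF.
# The input directory and --outfile path are placeholders, not part of this commit.
python convert-hf-to-gguf.py ./models/Meta-Llama-3-8B --outfile ./models/llama-3-8b-f16.gguf
```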