| author | Vaibhav Srivastav <vaibhavs10@gmail.com> | 2024-05-16 07:38:43 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-05-16 15:38:43 +1000 |
| commit | ad52d5c259344888b06fd5acd3344c663dd0621d (patch) | |
| tree | 14dfacc73efff2be2aa4dc757e69085a5d9b06ff | |
| parent | 172b78210aae0e54d3668c5de14200efab9fac23 (diff) | |
doc: add references to hugging face GGUF-my-repo quantisation web tool. (#7288)
* chore: add references to the quantisation space.
* fix grammer lol.
* Update README.md
Co-authored-by: Julien Chaumond <julien@huggingface.co>
* Update README.md
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Julien Chaumond <julien@huggingface.co>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
```
 README.md                   | 3 +++
 examples/quantize/README.md | 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)
```
```diff
diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -712,6 +712,9 @@ Building the program with BLAS support may lead to some performance improvements
 
 ### Prepare and Quantize
 
+> [!NOTE]
+> You can use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to quantise your model weights without any setup too. It is synced from `llama.cpp` main every 6 hours.
+
 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
 Note: `convert.py` does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.
diff --git a/examples/quantize/README.md b/examples/quantize/README.md
index 8a10365c..b78ece4e 100644
--- a/examples/quantize/README.md
+++ b/examples/quantize/README.md
@@ -1,6 +1,8 @@
 # quantize
 
-TODO
+You can also use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to build your own quants without any setup.
+
+Note: It is synced from llama.cpp `main` every 6 hours.
 
 ## Llama 2 7B
```
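The README text touched by this diff points at `convert-hf-to-gguf.py` and the `quantize` example as the local alternative to the GGUF-my-repo space. A rough sketch of that local workflow, assuming a built `llama.cpp` checkout and Hugging Face weights already downloaded to `./models/mymodel` (the paths and the `Q4_K_M` quant type are illustrative, not from the diff):

```shell
# Convert the Hugging Face weights to a GGUF file. Use convert-hf-to-gguf.py
# for models such as LLaMA 3 that convert.py does not support; by default the
# output GGUF is written next to the source weights.
python3 convert-hf-to-gguf.py ./models/mymodel/

# Quantize the resulting f16 GGUF down to a smaller type, e.g. Q4_K_M:
./quantize ./models/mymodel/ggml-model-f16.gguf \
           ./models/mymodel/ggml-model-Q4_K_M.gguf \
           Q4_K_M
```

The GGUF-my-repo space runs essentially these two steps server-side, which is why the note in the diff stresses that the space is re-synced from `llama.cpp` `main` every 6 hours: quant quality tracks the current conversion and quantization code.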