author Kawrakow <48489457+ikawrakow@users.noreply.github.com> 2024-07-27 07:55:01 +0200
committer GitHub <noreply@github.com> 2024-07-27 07:55:01 +0200
commit 154e0d75fccf1784fe9ff6fd76a630b66563da3d (patch)
tree 81ce6dbb5b1900c1aa78a879f0593c694cab9d27 /examples/quantize/README.md
parent 0684c3e9c70d49323b4fc517128cbe222cab7f96 (diff)
Merge mainline llama.cpp (#3)
* Merging mainline - WIP
* Merging mainline - WIP: AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.
* Merging mainline - fix Metal
* Remove check
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'examples/quantize/README.md')
-rw-r--r-- examples/quantize/README.md 89
1 file changed, 86 insertions(+), 3 deletions(-)
diff --git a/examples/quantize/README.md b/examples/quantize/README.md
index b78ece4e..553c2701 100644
--- a/examples/quantize/README.md
+++ b/examples/quantize/README.md
@@ -4,7 +4,89 @@ You can also use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-
Note: It is synced from llama.cpp `main` every 6 hours.
-## Llama 2 7B
+Example usage:
+
+```bash
+# obtain the official LLaMA model weights and place them in ./models
+ls ./models
+llama-2-7b tokenizer_checklist.chk tokenizer.model
+# [Optional] for models using BPE tokenizers
+ls ./models
+<folder containing weights and tokenizer json> vocab.json
+# [Optional] for PyTorch .bin models like Mistral-7B
+ls ./models
+<folder containing weights and tokenizer json>
+
+# install Python dependencies
+python3 -m pip install -r requirements.txt
+
+# convert the model to ggml FP16 format
+python3 convert_hf_to_gguf.py models/mymodel/
+
+# quantize the model to 4-bits (using Q4_K_M method)
+./llama-quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M
+
+# update the gguf filetype to current version if older version is now unsupported
+./llama-quantize ./models/mymodel/ggml-model-Q4_K_M.gguf ./models/mymodel/ggml-model-Q4_K_M-v2.gguf COPY
+```
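+
+The same tool accepts any of the supported quantization types as its last argument. A quick sketch producing several levels in one pass (assuming the F16 GGUF from the step above already exists):
+
+```bash
+# produce several quantization levels of the same model
+for q in Q4_K_M Q5_K_M Q8_0; do
+    ./llama-quantize ./models/mymodel/ggml-model-f16.gguf \
+                     ./models/mymodel/ggml-model-${q}.gguf ${q}
+done
+```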
+
+Run the quantized model:
+
+```bash
+# start inference on a gguf model
+./llama-cli -m ./models/mymodel/ggml-model-Q4_K_M.gguf -n 128
+```
+
+When running the larger models, make sure you have enough disk space to store all the intermediate files.
+
+## Memory/Disk Requirements
+
+As the models are currently fully loaded into memory, you will need adequate disk space to save them and sufficient RAM to load them. At the moment, memory and disk requirements are the same.
+
+| Model | Original size | Quantized size (Q4_0) |
+|------:|--------------:|----------------------:|
+| 7B | 13 GB | 3.9 GB |
+| 13B | 24 GB | 7.8 GB |
+| 30B | 60 GB | 19.5 GB |
+| 65B | 120 GB | 38.5 GB |
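+
+As a rough cross-check of the table above, file size can be estimated as parameter count × bits per weight / 8. A one-liner sketch (assuming ~6.74e9 parameters for the 7B model and ~4.5 bits/weight for Q4_0, which lands close to the 3.9 GB in the table):
+
+```bash
+# ~6.74B parameters at ~4.5 bits/weight -> bytes -> GB
+awk 'BEGIN { printf "%.1f GB\n", 6.74e9 * 4.5 / 8 / 1e9 }'
+```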
+
+## Quantization
+
+Several quantization methods are supported. They differ in the resulting model disk size and inference speed.
+
+*(outdated)*
+
+| Model | Measure | F16 | Q4_0 | Q4_1 | Q5_0 | Q5_1 | Q8_0 |
+|------:|--------------|-------:|-------:|-------:|-------:|-------:|-------:|
+| 7B | perplexity | 5.9066 | 6.1565 | 6.0912 | 5.9862 | 5.9481 | 5.9070 |
+| 7B | file size | 13.0G | 3.5G | 3.9G | 4.3G | 4.7G | 6.7G |
+| 7B | ms/tok @ 4 threads | 127 | 55 | 54 | 76 | 83 | 72 |
+| 7B | ms/tok @ 8 threads | 122 | 43 | 45 | 52 | 56 | 67 |
+| 7B | bits/weight | 16.0 | 4.5 | 5.0 | 5.5 | 6.0 | 8.5 |
+| 13B | perplexity | 5.2543 | 5.3860 | 5.3608 | 5.2856 | 5.2706 | 5.2548 |
+| 13B | file size | 25.0G | 6.8G | 7.6G | 8.3G | 9.1G | 13G |
+| 13B | ms/tok @ 4 threads | - | 103 | 105 | 148 | 160 | 131 |
+| 13B | ms/tok @ 8 threads | - | 73 | 82 | 98 | 105 | 128 |
+| 13B | bits/weight | 16.0 | 4.5 | 5.0 | 5.5 | 6.0 | 8.5 |
+
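+Perplexity figures like those above can be measured with the `llama-perplexity` tool from the same build (a sketch; `wiki.test.raw` stands in for whatever evaluation text you use):
+
+```bash
+# measure perplexity of the quantized model over a raw text file
+./llama-perplexity -m ./models/mymodel/ggml-model-Q4_K_M.gguf -f wiki.test.raw
+```
+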
+- [k-quants](https://github.com/ggerganov/llama.cpp/pull/1684)
+- recent k-quants improvements and new i-quants
+ - [#2707](https://github.com/ggerganov/llama.cpp/pull/2707)
+ - [#2807](https://github.com/ggerganov/llama.cpp/pull/2807)
+ - [#4773 - 2-bit i-quants (inference)](https://github.com/ggerganov/llama.cpp/pull/4773)
+ - [#4856 - 2-bit i-quants (inference)](https://github.com/ggerganov/llama.cpp/pull/4856)
+  - [#4861 - importance matrix](https://github.com/ggerganov/llama.cpp/pull/4861) (see the imatrix sketch after this list)
+ - [#4872 - MoE models](https://github.com/ggerganov/llama.cpp/pull/4872)
+ - [#4897 - 2-bit quantization](https://github.com/ggerganov/llama.cpp/pull/4897)
+ - [#4930 - imatrix for all k-quants](https://github.com/ggerganov/llama.cpp/pull/4930)
+  - [#4957 - imatrix on the GPU](https://github.com/ggerganov/llama.cpp/pull/4957)
+ - [#4969 - imatrix for legacy quants](https://github.com/ggerganov/llama.cpp/pull/4969)
+  - [#4996 - k-quants tuning](https://github.com/ggerganov/llama.cpp/pull/4996)
+ - [#5060 - Q3_K_XS](https://github.com/ggerganov/llama.cpp/pull/5060)
+ - [#5196 - 3-bit i-quants](https://github.com/ggerganov/llama.cpp/pull/5196)
+ - [quantization tuning](https://github.com/ggerganov/llama.cpp/pull/5320), [another one](https://github.com/ggerganov/llama.cpp/pull/5334), and [another one](https://github.com/ggerganov/llama.cpp/pull/5361)
+
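+A minimal importance-matrix workflow tying the PRs above together (a sketch, assuming the `llama-imatrix` tool from the same build and a calibration text of your choosing, here called `calibration.txt`):
+
+```bash
+# 1) collect an importance matrix from a calibration text
+./llama-imatrix -m ./models/mymodel/ggml-model-f16.gguf -f calibration.txt -o imatrix.dat
+
+# 2) use it to guide a low-bit i-quant
+./llama-quantize --imatrix imatrix.dat \
+    ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-IQ2_XS.gguf IQ2_XS
+```
+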
+**Llama 2 7B**
+
| Quantization | Bits per Weight (BPW) |
|--------------|-----------------------|
@@ -18,7 +100,8 @@ Note: It is synced from llama.cpp `main` every 6 hours.
| Q5_K_M | 5.68 |
| Q6_K | 6.56 |
-## Llama 2 13B
+**Llama 2 13B**
+
Quantization | Bits per Weight (BPW)
-- | --
Q2_K | 3.34
@@ -31,7 +114,7 @@ Q5_K_S | 5.51
Q5_K_M | 5.67
Q6_K | 6.56
-# Llama 2 70B
+**Llama 2 70B**
Quantization | Bits per Weight (BPW)
-- | --