| author | Kawrakow <iwankawrakow@gmail.com> | 2024-12-03 06:15:29 +0100 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-12-03 06:15:29 +0100 |
| commit | ccec00939a30aa7762a232ac4dcadba985ef9ee4 (patch) | |
| tree | b308bd1de2b982c37fc7542374fb77efa2abb726 /examples | |
| parent | 239a344f9935cb614c6208000edb0815214286a1 (diff) | |
Q8_0_R4 (#120)
* Adding q8_0_r4
We get PP-512(LLaMA-3.1-8B) = 268 t/s on a Ryzen-7950X, compared to 175.6 t/s for Q8_0.
* q8_0_r4: NEON
We get PP-512(LLaMA-3.1-8B) = 112.6 t/s on M2-Max.
* q8_0_r4: Zen4 matrix-vector specialization
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
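The bullets above only cite the resulting speedups. For readers unfamiliar with the _R4 family, here is a minimal sketch of the idea behind the commit: Q8_0_R4 keeps ordinary Q8_0 quantization (one fp16 scale per 32 int8 quants) but repacks 4 consecutive rows into one interleaved block, so the Zen4/NEON matrix kernels can load and multiply 4 rows per SIMD pass. The type and function names below (block_q8_0_r4, repack_q8_0_r4) and the 4-byte interleave stride are illustrative assumptions, not the code added by this commit.

```cpp
// Hypothetical sketch of row-interleaved Q8_0 repacking; the actual kernels in
// the repository may use a different interleave granularity and layout.
#include <cstdint>
#include <cstring>

typedef uint16_t ggml_half;      // fp16 storage type used by ggml
constexpr int QK8_0 = 32;        // quants per Q8_0 block

struct block_q8_0 {              // plain Q8_0: one scale + 32 int8 quants
    ggml_half d;
    int8_t    qs[QK8_0];
};

struct block_q8_0_r4 {           // assumed repacked layout: 4 rows fused
    ggml_half d[4];              // one scale per interleaved row
    int8_t    qs[4 * QK8_0];     // quants of the 4 rows, interleaved
};

// Repack one block column of 4 rows. The 4-byte per-row chunk size is an
// assumption; the real SIMD kernel may pick a different stride.
static void repack_q8_0_r4(const block_q8_0 * rows[4], block_q8_0_r4 * out) {
    for (int r = 0; r < 4; ++r) out->d[r] = rows[r]->d;
    for (int chunk = 0; chunk < QK8_0 / 4; ++chunk) {
        for (int r = 0; r < 4; ++r) {
            std::memcpy(out->qs + 16*chunk + 4*r, rows[r]->qs + 4*chunk, 4);
        }
    }
}
```

Interleaving rows this way means a matrix-matrix (PP) or matrix-vector kernel reads the scales and quants for 4 output rows from one contiguous block instead of striding across 4 separate rows, which is where the reported prompt-processing gains come from.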
Diffstat (limited to 'examples')
| -rw-r--r-- | examples/quantize/quantize.cpp | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/examples/quantize/quantize.cpp b/examples/quantize/quantize.cpp
index 9e0dc3cf..6cac41a2 100644
--- a/examples/quantize/quantize.cpp
+++ b/examples/quantize/quantize.cpp
@@ -42,6 +42,7 @@ static const std::vector<struct quant_option> QUANT_OPTIONS = {
     { "IQ4_NL",    LLAMA_FTYPE_MOSTLY_IQ4_NL,    " 4.50 bpw non-linear quantization", },
     { "IQ4_NL_X4", LLAMA_FTYPE_MOSTLY_IQ4_NL_X4, " 4.50 bpw non-linear quantization", },
     { "Q4_0_R4",   LLAMA_FTYPE_MOSTLY_Q4_0_R4,   " 4.50 bpw non-linear quantization", },
+    { "Q8_0_R4",   LLAMA_FTYPE_MOSTLY_Q8_0_R4,   " 8.50 bpw non-linear quantization", },
     { "IQ4_XS",    LLAMA_FTYPE_MOSTLY_IQ4_XS,    " 4.25 bpw non-linear quantization", },
     { "IQ4_KS",    LLAMA_FTYPE_MOSTLY_IQ4_KS,    " 4.25 bpw non-linear quantization", },
     { "IQ4_KSS",   LLAMA_FTYPE_MOSTLY_IQ4_KSS,   " 4.0 bpw non-linear quantization", },
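The only change under examples/ is this new QUANT_OPTIONS row, which exposes the type on the quantize command line: the tool matches the name the user passes against this table to pick the target ftype. The snippet below is an assumed, simplified sketch of that lookup, not the repository's actual parsing code, and the enum values are placeholders.

```cpp
// Illustrative sketch of how a QUANT_OPTIONS-style table maps a user-supplied
// type name (e.g. "Q8_0_R4") to an ftype; simplified from quantize.cpp.
#include <cstdio>
#include <string>
#include <vector>

enum llama_ftype {                       // placeholder values for the sketch
    LLAMA_FTYPE_MOSTLY_Q4_0_R4 = 1,
    LLAMA_FTYPE_MOSTLY_Q8_0_R4 = 2,
};

struct quant_option {
    std::string name;
    llama_ftype ftype;
    std::string desc;
};

static const std::vector<quant_option> QUANT_OPTIONS = {
    { "Q4_0_R4", LLAMA_FTYPE_MOSTLY_Q4_0_R4, " 4.50 bpw non-linear quantization" },
    { "Q8_0_R4", LLAMA_FTYPE_MOSTLY_Q8_0_R4, " 8.50 bpw non-linear quantization" },
};

// Hypothetical lookup: case-sensitive match of the requested name.
static bool try_parse_ftype(const std::string & arg, llama_ftype & out) {
    for (const auto & opt : QUANT_OPTIONS) {
        if (opt.name == arg) { out = opt.ftype; return true; }
    }
    return false;
}

int main() {
    llama_ftype ftype;
    if (try_parse_ftype("Q8_0_R4", ftype)) {
        std::printf("selected ftype enum value: %d\n", (int) ftype);
    }
}
```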