author:    Kawrakow <iwankawrakow@gmail.com>  2025-02-05 13:49:39 +0200
committer: GitHub <noreply@github.com>        2025-02-05 13:49:39 +0200
commit:    8b7536bda8b65107794c4df710f14ddfde430160
tree:      97a9dea70458bddcef51c734e22026ac51b51ed7 /include
parent:    ecf111a11ca56ff0731308f94bd6c5e96658b6ef
IQ1_S_R4: better 1.5 bpw quants (#185)
* iq1_s_r4: basics - quantize/dequantize
* iq1_s_r4: gemm/gemv works on AVX2/Zen4
* Make sure each thread gets a multiple of 4 rows (see the sketch after this list)
* iq1_s_r4: this is better
* iq1_s_r4: fix Zen4 after AVX2 changes
* iq1_s_r4: NEON gemm/gemv
* iq1_s_r4: use more bits for the shared experts
With this mix we arrive at PPL(512) = 9.4140
for DeepSeek-Lite using 1.766 bpw for the repeating layers
(see the note after this list).
On the Ryzen-7950X we get PP-512 = 494 t/s and
TG-128 = 52 t/s @ 16 threads.
* Fix a forgotten counter increment
* iq1_s_r4: slightly faster AVX2/Zen4 gemm/gemv
* Compiler warnings
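
The "multiple of 4 rows per thread" item above follows from the R4 layout: four rows are interleaved into each packed block, so a thread's slice of output rows has to be a multiple of 4. Below is a minimal sketch of such a partition with a hypothetical helper name, assuming the total row count is itself a multiple of 4; it is illustrative only, not the repository's actual scheduling code.

```c
// Illustrative sketch, not the repository's actual scheduling code.
// With the _R4 layouts four rows are interleaved into one packed block,
// so each thread's slice of output rows must be a multiple of 4.
#include <stdint.h>

// Hypothetical helper: compute the [*first, *last) row range for thread
// ith of nth threads, assuming nrows is itself a multiple of 4.
static void thread_row_range(int64_t nrows, int ith, int nth,
                             int64_t * first, int64_t * last) {
    const int64_t nblocks    = nrows / 4;                  // packed 4-row blocks
    const int64_t per_thread = (nblocks + nth - 1) / nth;  // blocks per thread
    *first = 4 * per_thread * ith;
    if (*first > nrows) *first = nrows;
    *last = *first + 4 * per_thread;
    if (*last > nrows) *last = nrows;
}
```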
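On the 1.766 bpw figure: shared-expert tensors are promoted to higher-bit types while the rest stay at the nominal 1.5 bpw of IQ1_S_R4, so the reported number is the size-weighted average over the quantized tensors:

```latex
% bpw of the mix = size-weighted mean over tensors t,
% where n_t is the number of weights in tensor t.
\[
\mathrm{bpw}_{\mathrm{mix}} \;=\; \frac{\sum_t n_t \,\mathrm{bpw}_t}{\sum_t n_t}
\]
```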
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'include')
-rw-r--r-- | include/llama.h | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/include/llama.h b/include/llama.h
index c21671c6..0f6d15ac 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -192,6 +192,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_IQ2_XXS_R4 = 219, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ2_XS_R4  = 220, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ3_XXS_R4 = 223, // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_IQ1_S_R4   = 224, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_NL_R4  = 225, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ3_S_R4   = 226, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ2_M_R4   = 229, // except 1d tensors
```
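
For completeness, a hedged sketch of how the new enum value would be exercised through the public API declared in include/llama.h, assuming this fork keeps upstream llama.cpp's llama_model_quantize() entry points; the file paths are placeholders.

```c
#include <stdint.h>
#include "llama.h"

int main(void) {
    // Start from the library defaults, then request the new 1.5 bpw type
    // added by this commit (enum value 224).
    llama_model_quantize_params params = llama_model_quantize_default_params();
    params.ftype = LLAMA_FTYPE_MOSTLY_IQ1_S_R4;

    // Input/output paths are placeholders; returns 0 on success.
    const uint32_t rc = llama_model_quantize("model-f16.gguf",
                                             "model-iq1_s_r4.gguf", &params);
    return rc == 0 ? 0 : 1;
}
```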