author    | Kawrakow <iwankawrakow@gmail.com> | 2025-02-05 13:49:39 +0200
committer | GitHub <noreply@github.com>       | 2025-02-05 13:49:39 +0200
commit    | 8b7536bda8b65107794c4df710f14ddfde430160 (patch)
tree      | 97a9dea70458bddcef51c734e22026ac51b51ed7 /ggml/include/ggml.h
parent    | ecf111a11ca56ff0731308f94bd6c5e96658b6ef (diff)
IQ1_S_R4: better 1.5 bpw quants (#185)
* iq1_s_r4: basics - quantize/dequantize
* iq1_s_r4: gemm/gemv works on AVX2/Zen4
* Make sure each thread is assigned a multiple of 4 rows (see the sketch after the commit message)
* iq1_s_r4: this is better
* iq1_s_r4: fix Zen4 after AVX2 changes
* iq1_s_r4: NEON gemm/gemv
* iq1_s_r4: more bits for shared experts
With this mix we arrive at PPL(512) = 9.4140
for Deepseek-Lite using 1.766 bpw for the repeating layers.
On the Ryzen-7950X we get PP-512 = 494 t/s and
TG-128 = 52 t/s @ 16 threads.
* Forgotten counter increment
* iq1_s_r4: slightly faster AVX2/Zen4 gemm/gemv
* Compiler warnings
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
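The "multiple of 4 rows per thread" item above follows from the R4 layout: four consecutive rows are interleaved into one packed block, so per-thread work has to be handed out in 4-row groups. Below is a minimal, illustrative sketch of such a split; the function and variable names (split_rows_r4, nrows, nth, ith) are assumptions for this example and are not taken from the commit.

```c
#include <assert.h>

/* Illustrative only: partition nrows rows of an R4-packed matrix across nth
 * threads so that every thread gets a multiple of 4 rows (one packed block
 * holds 4 interleaved rows). ith is this thread's index. */
static void split_rows_r4(int nrows, int nth, int ith, int * first_row, int * num_rows) {
    assert(nrows % 4 == 0);                    // R4 data always comes in 4-row groups
    const int ngroups    = nrows / 4;          // schedule in units of 4-row groups
    const int per_thread = (ngroups + nth - 1) / nth;
    int g0 = ith * per_thread;
    int g1 = g0 + per_thread;
    if (g0 > ngroups) g0 = ngroups;
    if (g1 > ngroups) g1 = ngroups;
    *first_row = 4 * g0;                       // back to rows: always a multiple of 4
    *num_rows  = 4 * (g1 - g0);
}
```

Splitting in 4-row groups rather than single rows keeps each thread's slice aligned with the packed blocks, so a gemm/gemv kernel never has to straddle a block boundary.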
Diffstat (limited to 'ggml/include/ggml.h')
-rw-r--r-- | ggml/include/ggml.h | 2
1 file changed, 2 insertions, 0 deletions
diff --git a/ggml/include/ggml.h b/ggml/include/ggml.h
index 5eea7dcd..9668dc32 100644
--- a/ggml/include/ggml.h
+++ b/ggml/include/ggml.h
@@ -427,6 +427,7 @@ extern "C" {
         GGML_TYPE_IQ2_XXS_R4= 216,
         GGML_TYPE_IQ2_XS_R4 = 217,
         GGML_TYPE_IQ3_XXS_R4= 218,
+        GGML_TYPE_IQ1_S_R4  = 219,
         GGML_TYPE_IQ4_NL_R4 = 220,
         GGML_TYPE_IQ3_S_R4  = 221,
         GGML_TYPE_IQ2_S_R4  = 222,
@@ -510,6 +511,7 @@ extern "C" {
         GGML_FTYPE_MOSTLY_IQ2_XXS_R4= 215, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ2_XS_R4 = 216, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ3_XXS_R4= 217, // except 1d tensors
+        GGML_FTYPE_MOSTLY_IQ1_S_R4  = 218, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ4_NL_R4 = 219, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ3_S_R4  = 220, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ2_S_R4 = 221, // except 1d tensors
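For a quick check after building this fork, the new type id can be passed through ggml's existing type-introspection helpers (ggml_type_name, ggml_blck_size, ggml_type_size). The small program below is only an illustrative usage sketch, not part of the commit.

```c
#include <stdio.h>
#include "ggml.h"

int main(void) {
    // Illustrative only: inspect the newly added row-interleaved 1.5-bpw type.
    const enum ggml_type t = GGML_TYPE_IQ1_S_R4;
    printf("name       : %s\n", ggml_type_name(t));
    printf("block size : %lld values per block\n", (long long) ggml_blck_size(t));
    printf("type size  : %zu bytes per block\n", ggml_type_size(t));
    return 0;
}
```

The matching GGML_FTYPE_MOSTLY_IQ1_S_R4 value is the file-level counterpart used when a whole model is quantized to this type, with 1d tensors excepted as the header comment notes.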