author    | Kawrakow <iwankawrakow@gmail.com> | 2025-05-15 16:02:39 +0300
committer | GitHub <noreply@github.com>       | 2025-05-15 16:02:39 +0300
commit    | 3d92d7f802b332927669f01bfa51ebbb56e868ba
tree      | c3913f67e36492c723cc47fe512078ee0dd19d59 /include
parent    | 3f8c865b920df844ba0cb4ba53c1ccce8874b045
Adding IQ5_KS - 5.25 bpw quants (#422)
* iq5_ks: basics
* iq5_ks: quantize
* iq5_ks: CUDA dequantize works
* iq5_ks: dot product works on CUDA
* iq5_ks: MMQ works
* iq5_ks: Zen4
* iq5_ks: AVX2
But it is not quite right, just like iq4_k, iq5_k, iq6_k, iq4_ks.
All of these need fixing on AVX2.
* iq5_ks: NEON
* iq5_ks: Metal dequantize
* iq5_ks: Metal dot product
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
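Where the 5.25 bpw in the title comes from: a back-of-the-envelope sketch, assuming IQ5_KS follows the layout of the other `_ks` types in this repo (a per-row float scale plus, per super-block of 256 weights, one 8-bit scale per 32-weight block and 4+1-bit packed quants). The authoritative layout is the `block_iq5_ks` struct in the ggml sources, which this hedged sketch does not reproduce.

```c
#include <stdio.h>

int main(void) {
    const int qk        = 256;      // weights per super-block
    const int low_qs    = qk / 2;   // 4 bits/weight -> 128 bytes of low quants
    const int high_qh   = qk / 8;   // 1 bit/weight  ->  32 bytes of high bits
    const int scales    = qk / 32;  // 1 byte per block of 32 -> 8 bytes
    const int bytes     = low_qs + high_qh + scales;   // 168 bytes

    // 168 bytes * 8 / 256 weights = 5.25 bpw; the per-row scale
    // amortizes to ~0 over a full row.
    printf("%d bytes per %d weights = %.2f bpw\n", bytes, qk, 8.0 * bytes / qk);
    return 0;
}
```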
Diffstat (limited to 'include')
-rw-r--r-- | include/llama.h | 3
1 file changed, 2 insertions, 1 deletion
diff --git a/include/llama.h b/include/llama.h
index 0f3ae862..98b08bbd 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -193,6 +193,7 @@ extern "C" {
         LLAMA_FTYPE_MOSTLY_IQ2_KS        = 147, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ4_KSS       = 148, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_Q8_KV         = 149, // except 1d tensors
+        LLAMA_FTYPE_MOSTLY_IQ5_KS        = 150, // except 1d tensors
         //
         LLAMA_FTYPE_MOSTLY_Q4_0_R8       = 202, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_Q8_0_R8       = 207, // except 1d tensors
@@ -231,7 +232,7 @@ extern "C" {
         LLAMA_ROPE_SCALING_TYPE_LINEAR   = 1,
         LLAMA_ROPE_SCALING_TYPE_YARN     = 2,
         LLAMA_ROPE_SCALING_TYPE_LONGROPE = 3,
-        LLAMA_ROPE_SCALING_TYPE_MAX_VALUE = LLAMA_ROPE_SCALING_TYPE_LONGROPE,
+        LLAMA_ROPE_SCALING_TYPE_MAX_VALUE = LLAMA_ROPE_SCALING_TYPE_LONGROPE,
     };

     enum llama_pooling_type {
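The header change above only registers the new ftype enum value. A minimal sketch of how a caller would request it through the quantize API already declared in llama.h; the model file names are placeholders for illustration.

```c
#include "llama.h"

int main(void) {
    llama_backend_init();

    // Start from the library defaults, then request the ftype added above.
    llama_model_quantize_params params = llama_model_quantize_default_params();
    params.ftype   = LLAMA_FTYPE_MOSTLY_IQ5_KS;   // = 150, new in this commit
    params.nthread = 8;

    // Hypothetical input/output paths.
    const uint32_t rc = llama_model_quantize("model-f16.gguf",
                                             "model-iq5_ks.gguf", &params);

    llama_backend_free();
    return rc == 0 ? 0 : 1;
}
```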