From b30c9e10d8710a49b2d2ab98d086b9f11bfaa228 Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Wed, 9 Oct 2024 12:54:40 +0300
Subject: New SOTA quantization: 4.25 bpw IQ4_KS (#83)

* iq4_k_xxs: basics

* WIP + adding iq3_kl quantization mix

* iq4_xxs: this looks very viable compared to iq4_xs

At the same 4.25 bpw, PPL is always better, for some models
significantly better. I'll rename it to iq4_ks and keep it.

* iq4_xxs: CUDA dot product

We get TG-128 = 126 t/s for LLaMA-3.1-8B, compared to 123 t/s for q4_0.

* iq4_xxs: scalar CPU dot product

Also fix the breakage I caused in the dedicated work-buffer
quantization when the multiplication is not done via iqk_mul_mat.

* iq4_xxs: Zen4

I noticed that iq4_xs is wrong on Zen4 (and possibly AVX2).
It is again the same mistake of packing int32_t back into int16_t,
which overflows occasionally (only occasionally, which is why the
result doesn't look completely wrong, so I didn't notice).

* Fix iq4_xs (Zen4)

* iq4_xxs: AVX2

* iq4_xxs: ARM_NEON

* iq4_xxs: Metal

* iq4_xxs: slightly faster TG on Metal

* iq4_xxs: rename to iq4_ks

After all, it is a smaller variant of iq4_k.

* iq3_kl: use iq4_ks instead of iq4_k/iq4_xs

---------

Co-authored-by: Iwan Kawrakow
---
 ggml/include/ggml.h | 2 ++
 1 file changed, 2 insertions(+)

(limited to 'ggml/include')

diff --git a/ggml/include/ggml.h b/ggml/include/ggml.h
index 13aaeafb..3054dabd 100644
--- a/ggml/include/ggml.h
+++ b/ggml/include/ggml.h
@@ -403,6 +403,7 @@ extern "C" {
         GGML_TYPE_IQ6_K   = 141,
         GGML_TYPE_IQ2_TN  = 142,
         GGML_TYPE_IQ1_TN  = 143,
+        GGML_TYPE_IQ4_KS  = 144,
         GGML_TYPE_COUNT,
     };
 
@@ -458,6 +459,7 @@ extern "C" {
         GGML_FTYPE_MOSTLY_IQ6_K   = 134, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ2_TN  = 135, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ1_TN  = 136, // except 1d tensors
+        GGML_FTYPE_MOSTLY_IQ4_KS  = 137, // except 1d tensors
     };
 
     // available tensor operations:
--
cgit v1.2.3