From b30c9e10d8710a49b2d2ab98d086b9f11bfaa228 Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Wed, 9 Oct 2024 12:54:40 +0300
Subject: New SOTA quantization: 4.25 bpw IQ4_KS (#83)

* iq4_k_xxs: basics

* WIP + adding iq3_kl quantization mix

* iq4_xxs: this looks very viable compared to iq4_xs

  At the same 4.25 bpw, PPL is always better, and for some models
  significantly better. I'll rename it to iq4_ks and keep it.

* iq4_xxs: CUDA dot product

  We get TG-128 = 126 t/s for LLaMA-3.1-8B, compared to 123 t/s for q4_0.

* iq4_xxs: scalar CPU dot product

  Also fix the breakage I caused in the dedicated work-buffer quantization
  portion when the multiplication is not done via iqk_mul_mat.

* iq4_xxs: Zen4

  I noticed that iq4_xs is wrong on Zen4 (and possibly AVX2). It is the
  same mistake of packing int32_t back into int16_t, which overflows
  occasionally (only occasionally, which is why the result doesn't look
  completely wrong and I didn't notice).

* Fix iq4_xs (Zen4)

* iq4_xxs: AVX2

* iq4_xxs: ARM_NEON

* iq4_xxs: Metal

* iq4_xxs: slightly faster TG on Metal

* iq4_xxs: rename to iq4_ks

  After all, it is a smaller variant of iq4_k.

* iq3_kl: use iq4_ks instead of iq4_k/iq4_xs

---------

Co-authored-by: Iwan Kawrakow
---
 include/llama.h | 2 ++
 1 file changed, 2 insertions(+)

(limited to 'include/llama.h')

diff --git a/include/llama.h b/include/llama.h
index 43c0091e..9fb4af53 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -177,6 +177,8 @@ extern "C" {
         LLAMA_FTYPE_MOSTLY_IQ6_K  = 142, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ2_TN = 143, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ1_TN = 144, // except 1d tensors
+        LLAMA_FTYPE_MOSTLY_IQ4_KS = 145, // except 1d tensors
+        LLAMA_FTYPE_MOSTLY_IQ3_KL = 146, // except 1d tensors

         LLAMA_FTYPE_GUESSED = 1024, // not specified in the model file
     };
--
cgit v1.2.3
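
A note on where the 4.25 bpw figure comes from: with the usual ggml
super-block of QK_K = 256 weights, 4-bit quants plus one 8-bit scale per
group of 32 give exactly 4.25 bits per weight. Below is a minimal sketch
of such a layout; block_iq4_ks_sketch is a hypothetical struct for
illustration, and the actual block_iq4_ks definition in the source may
differ in detail:

    #include <stdint.h>

    #define QK_K 256  // ggml super-block size

    // Hypothetical layout consistent with 4.25 bpw; the real
    // block_iq4_ks definition may differ.
    typedef struct {
        uint8_t scales[QK_K/32]; // 8 x 8-bit block scales =   64 bits
        uint8_t qs[QK_K/2];      // 256 x 4-bit quants     = 1024 bits
    } block_iq4_ks_sketch;

    // (64 + 1024) / 256 = 1088 / 256 = 4.25 bits per weight;
    // any per-row scale amortizes to ~0 over a full tensor row.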
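The Zen4 bug described above (packing int32_t back into int16_t) is worth
spelling out, since it is a recurring trap in these kernels. A scalar
illustration of the overflow, not the actual SIMD code:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        // Worst case for one block of 32 int8 x int8 products:
        int8_t x[32], y[32];
        for (int i = 0; i < 32; ++i) { x[i] = 127; y[i] = 127; }

        int32_t sum = 0;
        for (int i = 0; i < 32; ++i) sum += (int32_t)x[i] * y[i];

        // 32 * 127 * 127 = 516128, far outside int16_t's [-32768, 32767].
        // A scalar cast typically wraps (here to -8160); the AVX2/AVX512
        // pack instructions saturate instead. Either way the value is
        // wrong, but only for blocks with large sums, so it is easy to
        // miss in quick PPL checks.
        int16_t packed = (int16_t)sum;
        printf("sum = %d, packed = %d\n", sum, (int)packed);
        return 0;
    }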
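On the API side, the two new ftypes are used like any other: set them in
llama_model_quantize_params and call llama_model_quantize(). A minimal
sketch using the public llama.h API (file names are placeholders):

    #include "llama.h"
    #include <stdio.h>

    int main(void) {
        llama_model_quantize_params params = llama_model_quantize_default_params();
        params.ftype = LLAMA_FTYPE_MOSTLY_IQ4_KS; // or LLAMA_FTYPE_MOSTLY_IQ3_KL

        // Returns 0 on success.
        uint32_t rc = llama_model_quantize("model-f16.gguf",
                                           "model-iq4_ks.gguf", &params);
        if (rc != 0) {
            fprintf(stderr, "quantization failed (code %u)\n", rc);
            return 1;
        }
        return 0;
    }

The same types should also be selectable by name (IQ4_KS, IQ3_KL) in the
quantize example program, which maps type names to these ftype values.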