From 910a13409463f7aedb0a92be013a1b9bb50f4859 Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Sun, 13 Oct 2024 13:34:30 +0300
Subject: IQ2_KS: 2.1875 bpw non-linear quantization (#85)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Experimenting

* iq2k: Try make_qx_quants for the scale

Slightly better for LLaMA-3.1 and Gemma-2, slightly worse for Qwen2.5

* iq2k with make_qx_quants: adjust scale

* iq2ks: basics

* iq2_ks: CUDA works

* iq2_ks: WIP

* iq2_ks: WIP

* iq2_ks: Zen4

* iq2_ks: AVX2

* iq2_ks: scalar dot product

* iq2_ks: ARM_NEON

* iq2_ks: Metal

* iq2_ks: faster Metal

LLaMA-3.1-8B:
  PP-512 = 475.22 ± 0.37 t/s
  TG-128 =  45.32 ± 0.03 t/s

---------

Co-authored-by: Iwan Kawrakow
---
 include/llama.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/llama.h b/include/llama.h
index 9fb4af53..c9387e6b 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -179,6 +179,7 @@ extern "C" {
         LLAMA_FTYPE_MOSTLY_IQ1_TN = 144, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ4_KS = 145, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ3_KL = 146, // except 1d tensors
+        LLAMA_FTYPE_MOSTLY_IQ2_KS = 147, // except 1d tensors
 
         LLAMA_FTYPE_GUESSED = 1024, // not specified in the model file
     };
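
Usage note (illustrative, not part of the patch): with LLAMA_FTYPE_MOSTLY_IQ2_KS defined, the new type can be requested through the existing llama_model_quantize C API declared in the same header. The sketch below assumes the stock llama.h quantization entry points; the file names and thread count are placeholders.

    #include <stdio.h>
    #include "llama.h"

    int main(void) {
        // Start from the library defaults, then select the new 2.1875 bpw type.
        llama_model_quantize_params params = llama_model_quantize_default_params();
        params.ftype   = LLAMA_FTYPE_MOSTLY_IQ2_KS; // enum value added by this patch
        params.nthread = 8;                         // placeholder thread count

        // Input/output paths are placeholders; returns 0 on success.
        if (llama_model_quantize("model-f16.gguf", "model-iq2_ks.gguf", &params) != 0) {
            fprintf(stderr, "quantization failed\n");
            return 1;
        }
        return 0;
    }

For reference, 2.1875 bpw works out to 70 bytes per 256 weights (256 x 2 = 512 bits of quants plus 48 bits of scale/metadata), assuming the 256-weight super-block layout used by the other k-quants; the exact block layout lives in the quantization code, not in this header.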