path: root/ggml/include/ggml.h
author    Kawrakow <iwankawrakow@gmail.com>  2024-10-13 13:34:30 +0300
committer GitHub <noreply@github.com>        2024-10-13 13:34:30 +0300
commit    910a13409463f7aedb0a92be013a1b9bb50f4859 (patch)
tree      16e13e1fd3010549877408a0a62706b2bc5d5f0c /ggml/include/ggml.h
parent    c15de3654e0002537c8052fd6d52d879e778e88c (diff)
IQ2_KS: 2.1875 bpw non-linear quantization (#85)
* Experimenting
* iq2k: Try make_qx_quants for the scale
  Slightly better for LLaMA-3.1, Gemma-2, slightly worse for Qwen2.5
* iq2k with make_qx_quants: adjust scale
* iq2ks: basics
* iq2_ks: CUDA works
* iq2_ks: WIP
* iq2_ks: WIP
* iq2_ks: Zen4
* iq2_ks: AVX2
* iq2_ks: scalar dot product
* iq2_ks: ARM_NEON
* iq2_ks: Metal
* iq2_ks: faster Metal
  LLaMA-3.1-8B: PP-512 = 475.22 ± 0.37 t/s, TG-128 = 45.32 ± 0.03 t/s
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
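For context on the 2.1875 bpw figure in the title: assuming the QK_K = 256 super-block size used by the other k-/i-quants (an assumption, not stated in this diff), it works out to exactly 70 bytes per super-block. The sketch below only does that bit-budget arithmetic; the actual block layout lives in the quantization sources, not in this header change.

```c
#include <stdio.h>

// Bit-budget sketch for IQ2_KS (illustrative only; the real layout is an
// assumption): 2.1875 bpw over an assumed QK_K = 256 super-block gives
// 2.1875 * 256 = 560 bits = 70 bytes. A plain 2-bit payload already takes
// 256 * 2 / 8 = 64 bytes, which would leave 48 bits for scales and flags.
int main(void) {
    const int    qk_k       = 256;     // assumed super-block size
    const double bpw        = 2.1875;  // from the commit title
    const double block_bits = bpw * qk_k;

    printf("bits per super-block : %.0f\n", block_bits);           // 560
    printf("bytes per super-block: %.0f\n", block_bits / 8);       // 70
    printf("2-bit payload bytes  : %d\n", qk_k * 2 / 8);           // 64
    printf("metadata bits left   : %.0f\n", block_bits - qk_k * 2); // 48
    return 0;
}
```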
Diffstat (limited to 'ggml/include/ggml.h')
-rw-r--r--  ggml/include/ggml.h | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/ggml/include/ggml.h b/ggml/include/ggml.h
index 3054dabd..fd7c23b9 100644
--- a/ggml/include/ggml.h
+++ b/ggml/include/ggml.h
@@ -404,6 +404,7 @@ extern "C" {
GGML_TYPE_IQ2_TN = 142,
GGML_TYPE_IQ1_TN = 143,
GGML_TYPE_IQ4_KS = 144,
+ GGML_TYPE_IQ2_KS = 145,
GGML_TYPE_COUNT,
};
@@ -460,6 +461,7 @@ extern "C" {
GGML_FTYPE_MOSTLY_IQ2_TN = 135, // except 1d tensors
GGML_FTYPE_MOSTLY_IQ1_TN = 136, // except 1d tensors
GGML_FTYPE_MOSTLY_IQ4_KS = 137, // except 1d tensors
+ GGML_FTYPE_MOSTLY_IQ2_KS = 138, // except 1d tensors
};
// available tensor operations:
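With the enum entry in place, the new type can be queried through ggml's existing type-introspection helpers. A minimal sketch follows; the block size and byte count it prints come from the IQ2_KS type traits defined elsewhere in this PR, not from the header change above.

```c
#include <stdint.h>
#include <stdio.h>
#include "ggml.h"

// Inspect the newly registered IQ2_KS type via ggml's public helpers.
int main(void) {
    const enum ggml_type t = GGML_TYPE_IQ2_KS;

    const int64_t blck  = ggml_blck_size(t);  // weights per block
    const size_t  bytes = ggml_type_size(t);  // bytes per block

    printf("type  : %s\n", ggml_type_name(t));
    printf("block : %lld weights, %zu bytes (%.4f bpw)\n",
           (long long) blck, bytes, 8.0 * (double) bytes / (double) blck);
    return 0;
}
```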