author    | Kawrakow <iwankawrakow@gmail.com> | 2025-07-20 10:05:23 +0200
committer | GitHub <noreply@github.com>       | 2025-07-20 10:05:23 +0200
commit    | f989fb03bd12752ad6e93717ca4bd298d5001d99 (patch)
tree      | 7a127aba5c05667904b7e28a46d07c2d295ef619 /ggml/src/ggml-quants.c
parent    | 07673c6c33753487dd054dcff37f19d93d6c56d3 (diff)
Adding IQ1_KT - 1.75 bpw SOTA quants (#616)
* iq1_kt: basics
* iq1_kt: CUDA dequantize
Testing with LLaMA-3.1-8B-Instruct, we get almost the same PPL
as iq2_xxs, so about 0.2 bpw fewer for the same quality (see the size sketch after the change list).
* iq1_kt: CUDA MMQ
* iq1_kt: CUDA MMVQ
* iq1_kt: AVX2 GEMM/GEMV
* iq1_kt: convert/repack to q8_0_r8 (AVX2) (q8_0 layout sketched after the commit message)
* iq1_kt: slightly faster GEMV
18.6 t/s -> 19.4 t/s
* iq1_kt: NEON GEMM/GEMV
Pathetic as usual
* iq1_kt: slightly faster NEON - still pathetic
* iq1_kt: tiny bit better GEMV on NEON
* iq1_kt: convert/repack to q8_0_r8 (NEON)
* iq1_kt: very slightly faster convert/repack to q8_0_r8 on NEON
* Adding forgotten file
* iq1_kt: add to constants.py
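For scale, a quick size check of what the quoted bpw figures mean for the 8B model used in the test above. This is a back-of-the-envelope sketch: the 1.75 bpw figure comes from the commit title, 2.0625 bpw is the nominal size of iq2_xxs, and the 8e9 parameter count is a round illustrative number, not a measurement.

```c
// Back-of-the-envelope sketch: bytes needed for the weights of an
// ~8B-parameter model at the two bits-per-weight (bpw) figures.
// All numbers are illustrative round figures, not measured model sizes.
#include <stdio.h>

int main(void) {
    const double n_params    = 8e9;    // ~LLaMA-3.1-8B parameter count (rounded)
    const double bpw_iq1_kt  = 1.75;   // from the commit title
    const double bpw_iq2_xxs = 2.0625; // nominal bpw of iq2_xxs

    printf("iq1_kt : %.2f GB\n", n_params * bpw_iq1_kt  / 8.0 / 1e9);
    printf("iq2_xxs: %.2f GB\n", n_params * bpw_iq2_xxs / 8.0 / 1e9);
    return 0;
}
```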
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
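Several steps above convert/repack to q8_0_r8 before the GEMM. As a rough illustration of the q8_0 target format, here is a simplified scalar quantization sketch: the scale is kept as a plain float instead of ggml_half, the rounding uses roundf rather than ggml's nearest-int helper, and the "_r8" 8-row interleaving is only described in a comment. These simplifications are assumptions for readability, not the repo's actual kernels.

```c
// Simplified sketch of q8_0 quantization: each block of 32 weights is stored
// as one scale plus 32 int8 values (real ggml stores the scale as ggml_half).
// The "_r8" repacked layout additionally interleaves blocks from 8 consecutive
// rows so a GEMM kernel can load them contiguously (assumption based on the
// naming; the exact interleave order is implementation-specific).
#include <math.h>
#include <stdint.h>

#define QK8_0 32

typedef struct {
    float  d;           // block scale (ggml uses ggml_half here)
    int8_t qs[QK8_0];   // quantized values in [-127, 127]
} block_q8_0_sketch;

static void quantize_row_q8_0_sketch(const float * x, block_q8_0_sketch * y, int k) {
    const int nb = k / QK8_0;                      // number of blocks per row
    for (int i = 0; i < nb; i++) {
        float amax = 0.0f;                         // absolute max in this block
        for (int j = 0; j < QK8_0; j++) {
            const float v = fabsf(x[i*QK8_0 + j]);
            if (v > amax) amax = v;
        }
        const float d  = amax / 127.0f;            // scale
        const float id = d ? 1.0f / d : 0.0f;      // inverse scale
        y[i].d = d;
        for (int j = 0; j < QK8_0; j++) {
            y[i].qs[j] = (int8_t)roundf(x[i*QK8_0 + j] * id);
        }
    }
}
```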
Diffstat (limited to 'ggml/src/ggml-quants.c')
-rw-r--r-- | ggml/src/ggml-quants.c | 1
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/ggml/src/ggml-quants.c b/ggml/src/ggml-quants.c
index e18cee73..e49417af 100644
--- a/ggml/src/ggml-quants.c
+++ b/ggml/src/ggml-quants.c
@@ -15421,6 +15421,7 @@ bool ggml_validate_row_data(enum ggml_type type, const void * data, size_t nbytes
         case GGML_TYPE_Q6_0:   break;
         case GGML_TYPE_IQ2_K:  break;
         case GGML_TYPE_IQ2_KS: break;
+        case GGML_TYPE_IQ1_KT: break;
         case GGML_TYPE_IQ2_KT: break;
         case GGML_TYPE_IQ3_KT: break;
         case GGML_TYPE_IQ4_KT: break;
```
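The one-line change registers the new type with ggml_validate_row_data, whose switch otherwise rejects the type as unknown; a bare `break` means IQ1_KT rows need no per-type scale checks beyond the generic size validation. A hypothetical caller-side sketch follows, assuming the usual ggml.h declarations; `check_tensor` is an illustrative name, not code from this repo.

```c
#include <stdbool.h>
#include <stdio.h>
#include "ggml.h"

// Hypothetical helper: without the new case above, this call would fail for
// IQ1_KT tensors because the validator's switch would not recognize the type.
static bool check_tensor(const struct ggml_tensor * t) {
    if (!ggml_validate_row_data(t->type, t->data, ggml_nbytes(t))) {
        fprintf(stderr, "invalid row data in tensor %s\n", t->name);
        return false;
    }
    return true;
}
```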