| author | Kawrakow <iwankawrakow@gmail.com> | 2025-07-20 10:05:23 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-07-20 10:05:23 +0200 |
| commit | f989fb03bd12752ad6e93717ca4bd298d5001d99 (patch) | |
| tree | 7a127aba5c05667904b7e28a46d07c2d295ef619 /include/llama.h | |
| parent | 07673c6c33753487dd054dcff37f19d93d6c56d3 (diff) | |
Adding IQ1_KT - 1.75 bpw SOTA quants (#616)
* iq1_kt: basics
* iq1_kt: CUDA dequantize
Testing with Llama-3.1-8B-Instruct, we get almost the same PPL
as iq2_xxs, i.e., about 0.2 bpw fewer bits for the same quality.
* iq1_kt: CUDA MMQ
* iq1_kt: CUDA MMVQ
* iq1_kt: AVX2 GEMM/GEMV
* iq1_kt: convert/repack to q8_0_r8 (AVX2)
* iq1_kt: slightly faster GEMV
18.6 t/s -> 19.4 t/s
* iq1_kt: NEON GEMM/GEMV
Pathetic as usual
* iq1_kt: slightly faster NEON - still pathetic
* iq1_kt: tiny bit better GEMV on NEON
* iq1_kt: convert/repack to q8_0_r8 (NEON)
* iq1_kt: very slightly faster convert/repack to q8_0_r8 on NEON
* Adding forgotten file
* iq1_kt: add to constants.py
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
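For scale, the GEMV improvement quoted above (18.6 t/s -> 19.4 t/s) can be checked with a quick back-of-the-envelope calculation; this is a standalone sketch, not part of the patch:

```python
# Relative speedup of the iq1_kt AVX2 GEMV change quoted above
before_tps = 18.6  # tokens/s before the change
after_tps = 19.4   # tokens/s after the change

speedup_pct = (after_tps - before_tps) / before_tps * 100
print(f"{speedup_pct:.1f}% faster")  # about 4.3%
```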
Diffstat (limited to 'include/llama.h')
-rw-r--r-- | include/llama.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/include/llama.h b/include/llama.h
index bcd81f4f..1bc1bdaf 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -206,6 +206,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_IQ4_KT = 153, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ3_KS = 154, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ2_KL = 155, // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_IQ1_KT = 156, // except 1d tensors
     //
     LLAMA_FTYPE_MOSTLY_Q4_0_R8 = 202, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q8_0_R8 = 207, // except 1d tensors
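To put the "1.75 bpw" figure in storage terms, here is a rough sketch; the 256-weight block size is an assumption based on llama.cpp's k-quant convention and is not stated in this commit:

```python
# Back-of-the-envelope storage cost of a 1.75 bpw quant type
block_size = 256   # weights per block (assumed, per llama.cpp k-quant convention)
bpw = 1.75         # nominal bits per weight for IQ1_KT

bytes_per_block = block_size * bpw / 8
print(bytes_per_block)  # 56.0 bytes per 256 weights
```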