author    | Kawrakow <iwankawrakow@gmail.com> | 2024-10-09 12:54:40 +0300
committer | GitHub <noreply@github.com> | 2024-10-09 12:54:40 +0300
commit    | b30c9e10d8710a49b2d2ab98d086b9f11bfaa228 (patch)
tree      | d2d0feb6ca78d3393a88acf81459e2f31d17c93a /examples/quantize-stats/quantize-stats.cpp
parent    | c0ddc644bbb53d1fac10cac454756657b5f1ba32 (diff)
New SOTA quantization: 4.25 bpw IQ4_KS (#83)
* iq4_k_xxs: basics
* WIP + adding iq3_kl quantization mix
* iq4_xxs: this looks very viable compared to iq4_xs
At the same 4.25 bpw PPL is always better, for some models
significantly better. I'll rename to iq4_ks and keep it.
* iq4_xxs: CUDA dot product
We get TG-128 = 126 t/s for LLaMA-3.1-8B, compared to 123 t/s for q4_0.
* iq4_xxs: scalar CPU dot product
Also fix the breakage I caused in the dedicated work-buffer
quantization path when the multiplication is not done
via iqk_mul_mat.
* iq4_xxs: Zen4
I noticed that iq4_xs is wrong on Zen4 (and possibly AVX2):
the same mistake again of packing int32_t back into int16_t,
which occasionally overflows (only occasionally, which is why
the result didn't look completely wrong and I didn't notice).
* Fix iq4_xs (Zen4)
* iq4_xxs: AVX2
* iq4_xxs: ARM_NEON
* iq4_xxs: Metal
* iq4_xxs: slightly faster TG on Metal
* iq4_xxs: rename to iq4_ks
After all, it is a smaller variant of iq4_k.
* iq3_kl: use iq4_ks instead of iq4_k/iq4_xs
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'examples/quantize-stats/quantize-stats.cpp')
0 files changed, 0 insertions, 0 deletions