author | Kawrakow <iwankawrakow@gmail.com> | 2024-12-04 15:20:07 +0100
---|---|---
committer | GitHub <noreply@github.com> | 2024-12-04 15:20:07 +0100
commit | f64de08203aaee95ca755336de3e1db85d990198 (patch) |
tree | 9af01056e0b304ee5df5792f25d82066931eb4d6 /include |
parent | f1f4eb988fe5ee969100cd0d3782fd7460d13949 (diff) |
IQ4_XS_R4 (#123)
* Adding iq4_xs_r4
This is a first working version on Zen4.
We get PP-512(LLaMA-3.1-8B) = 226 t/s, about 16% slower
than iq4_nl_x4.
* iq4_xs_r4: WIP
* iq4_xs_r4: Use AVX2 version for matrix x vector on Zen4
* iq4_xs_r4: NEON
We get PP-512(LLaMA-3.1-8B) = 115.6 t/s on M2-Max,
up from 68.2 t/s for iq4_xs!
* DRY
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'include')
-rw-r--r-- | include/llama.h | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/include/llama.h b/include/llama.h
index bf843ad2..77c988a5 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -184,6 +184,7 @@ extern "C" {
         LLAMA_FTYPE_MOSTLY_Q8_0_R4   = 207, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_Q5_0_R4   = 208, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ4_NL_X4 = 225, // except 1d tensors
+        LLAMA_FTYPE_MOSTLY_IQ4_XS_R4 = 230, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_Q6_0_R4   = 235, // except 1d tensors

         LLAMA_FTYPE_GUESSED = 1024, // not specified in the model file
```