author    | Kawrakow <iwankawrakow@gmail.com> | 2024-12-03 14:48:26 +0100
committer | GitHub <noreply@github.com>       | 2024-12-03 14:48:26 +0100
commit    | f1f4eb988fe5ee969100cd0d3782fd7460d13949 (patch)
tree      | 97bb1a75ba7189f05e82835de6b2b65661a1ce7a /include/llama.h
parent    | c5bf589367cd609f4c0ff73a6534bbde7902abe8 (diff)
Q6_0_R4 (#122)
* Adding q6_0_r4
We get PP-512(LLaMA-3.1-8B) = 257 t/s on a Ryzen-7950X.
* q6_0_r4: NEON
We get PP-512(LLaMA-3.1-8B) = 95 t/s on M2-Max.
In terms of ops, q6_0_r4 is identical to q5_0_r4,
except that the high bits are loaded with
vld1q_u8_x2 instead of vld1q_u8. It is strange that
this can make a 5% difference in performance, especially
considering that the load is amortized (re-used) over
8 columns of the right matrix. Or am I running out of
vector registers?
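A minimal sketch of the load difference described above, using arm_neon.h intrinsics; the helper names and the per-block byte counts are assumptions for illustration, not the actual kernel code:

#include <arm_neon.h>
#include <stdint.h>

// q5_0_r4: 1 high bit per weight, 32 weights per block, 4 interleaved rows
// -> 4 bytes per row, 16 bytes total, a single 16-byte load.
static inline uint8x16_t load_hbits_q5_0_r4(const uint8_t * hbits) {
    return vld1q_u8(hbits);
}

// q6_0_r4: 2 high bits per weight, 32 weights per block, 4 interleaved rows
// -> 8 bytes per row, 32 bytes total, a paired 32-byte load that
//    occupies two vector registers instead of one.
static inline uint8x16x2_t load_hbits_q6_0_r4(const uint8_t * hbits) {
    return vld1q_u8_x2(hbits);
}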
* Fix AVX2
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'include/llama.h')
-rw-r--r-- | include/llama.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/include/llama.h b/include/llama.h
index 6d7da87a..bf843ad2 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -184,6 +184,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_Q8_0_R4   = 207, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q5_0_R4   = 208, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_NL_X4 = 225, // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_Q6_0_R4   = 235, // except 1d tensors

     LLAMA_FTYPE_GUESSED = 1024, // not specified in the model file
 };
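For reference, the added enum value is selected through the existing quantization API declared in llama.h. A minimal sketch, assuming the standard llama_model_quantize() entry point; the wrapper function name is made up for illustration:

#include "llama.h"

// Hypothetical helper: quantize a GGUF model to Q6_0_R4 using the
// ftype value introduced by this commit. Returns 0 on success.
static int quantize_to_q6_0_r4(const char * fname_in, const char * fname_out) {
    llama_model_quantize_params params = llama_model_quantize_default_params();
    params.ftype = LLAMA_FTYPE_MOSTLY_Q6_0_R4; // value 235, applied to all but 1d tensors
    return (int) llama_model_quantize(fname_in, fname_out, &params);
}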