author | Kawrakow <iwankawrakow@gmail.com> | 2024-12-03 12:59:22 +0100
---|---|---
committer | GitHub <noreply@github.com> | 2024-12-03 12:59:22 +0100
commit | c5bf589367cd609f4c0ff73a6534bbde7902abe8 (patch) |
tree | fa17f82c717d535222c1843fc9fca2d66f4d6ea7 /include/llama.h |
parent | ccec00939a30aa7762a232ac4dcadba985ef9ee4 (diff) |
Q5_0_R4 (#121)
* Adding q5_0_r4
We get PP-512(LLaMA-3.1-8B) = 256.7 t/s on a Ryzen-7950X.
We even get a TG-128 improvement, to 11.7 t/s from 11.1 t/s.
* q5_0_r4: NEON
We get PP-512(LLaMA-3.1-8B) = 99.6 t/s on an M2-Max,
up from 71.0 t/s for Q5_0. The difference from mainline llama.cpp
is no longer funny: they get 26.5 t/s for Q5_0.
For TG, we are not able to fully saturate the memory bandwidth
and arrive at 22.1 t/s @ 8 threads. Mainline llama.cpp gets
20.6 t/s for Q5_0.
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'include/llama.h')
-rw-r--r-- | include/llama.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/include/llama.h b/include/llama.h
index 5e935533..6d7da87a 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -182,6 +182,7 @@ extern "C" {
     // LLAMA_FTYPE_MOSTLY_Q4_0_R4 = 202, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q8_0_R4 = 207, // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_Q5_0_R4 = 208, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_NL_X4 = 225, // except 1d tensors
     LLAMA_FTYPE_GUESSED = 1024, // not specified in the model file
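For context, the new entry extends the `llama_ftype` enum, so the format can be requested through the standard quantization entry point declared in `llama.h`. A minimal sketch, assuming the stock `llama_model_quantize()` API; the file names are hypothetical placeholders:

```c
#include <stdio.h>
#include "llama.h"

int main(void) {
    // Start from the library's default quantization parameters,
    // then select the ftype added by this commit.
    llama_model_quantize_params params = llama_model_quantize_default_params();
    params.ftype = LLAMA_FTYPE_MOSTLY_Q5_0_R4;

    // Quantize an f16 GGUF to Q5_0_R4 (returns 0 on success).
    // Input/output paths are placeholders for illustration.
    if (llama_model_quantize("model-f16.gguf", "model-q5_0_r4.gguf", &params) != 0) {
        fprintf(stderr, "quantization failed\n");
        return 1;
    }
    return 0;
}
```

The same ftype would normally also be exposed by name through the quantization command-line tool, but that mapping lives outside this header and is not part of this diff.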