author    Kawrakow <iwankawrakow@gmail.com>    2024-12-09 16:59:18 +0100
committer GitHub <noreply@github.com>          2024-12-09 16:59:18 +0100
commit    3ec193b4856df8e5827b83a8c7686e8498c5e5b8 (patch)
tree      149666dbffdf1d443bb9ff8f2564ed9bb1959201 /include
parent    43e65a672a98d931998559785b58f1e980e87f54 (diff)
Q4_K_R4 (#129)
* Something is still wrong

* Simply don't see what is wrong

* q4_k_r4: finally works on Zen4

I had forgotten to prevent token_embd.weight being quantized with q4_k_r4!

* q4_k_r4: AVX2

We get PP-512(LLaMA-3.1-8B) = 267 t/s on a Ryzen-5975WX. This is ~30% better than Q4_K_S.

* q4_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 110 t/s. Not quite as good as q4_0_r4, but still a massive improvement compared to the 69 t/s for q4_K.

* q4_k_r4: slightly better AVX2

PP-512 goes from 267 t/s to 282 t/s on Ryzen-5975WX.

* Minor

* Minor

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
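The token_embd.weight fix mentioned above is a per-tensor fallback: the row-interleaved Q4_K_R4 layout must not be applied to the embedding tensor, presumably because it is read row-by-row at lookup time. A minimal sketch of such a check follows; the helper name pick_tensor_type is hypothetical, and GGML_TYPE_Q4_K_R4 is assumed to be the ggml-level counterpart of the new ftype in ik_llama.cpp.

// Hypothetical sketch (not the actual ik_llama.cpp code) of the per-tensor
// fallback described above: when the requested type is the row-interleaved
// Q4_K_R4, token_embd.weight is quantized with plain Q4_K instead.
#include <string.h>
#include "ggml.h"

static enum ggml_type pick_tensor_type(const char * name, enum ggml_type requested) {
    if (requested == GGML_TYPE_Q4_K_R4 && strcmp(name, "token_embd.weight") == 0) {
        return GGML_TYPE_Q4_K; // fall back to the non-interleaved base type
    }
    return requested;
}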
Diffstat (limited to 'include')
-rw-r--r--    include/llama.h    1
1 file changed, 1 insertion, 0 deletions
diff --git a/include/llama.h b/include/llama.h
index 9eec6a43..2fa78879 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -183,6 +183,7 @@ extern "C" {
LLAMA_FTYPE_MOSTLY_Q4_0_R4 = 202, // except 1d tensors
LLAMA_FTYPE_MOSTLY_Q8_0_R4 = 207, // except 1d tensors
LLAMA_FTYPE_MOSTLY_Q5_0_R4 = 208, // except 1d tensors
+ LLAMA_FTYPE_MOSTLY_Q4_K_R4 = 214, // except 1d tensors
LLAMA_FTYPE_MOSTLY_IQ4_NL_R4 = 225, // except 1d tensors
LLAMA_FTYPE_MOSTLY_IQ4_XS_R4 = 230, // except 1d tensors
LLAMA_FTYPE_MOSTLY_Q6_0_R4 = 235, // except 1d tensors
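The new enum value is selected through the same public quantization API in llama.h as the existing ftypes. A minimal usage sketch, with placeholder file names:

// Quantize an f16 GGUF model to the new row-interleaved Q4_K_R4 format.
#include "llama.h"

int main(void) {
    llama_model_quantize_params params = llama_model_quantize_default_params();
    params.ftype = LLAMA_FTYPE_MOSTLY_Q4_K_R4; // enum value 214 added by this commit
    // Returns 0 on success; input/output paths here are placeholders.
    return (int) llama_model_quantize("model-f16.gguf", "model-q4_k_r4.gguf", &params);
}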