From 3ec193b4856df8e5827b83a8c7686e8498c5e5b8 Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Mon, 9 Dec 2024 16:59:18 +0100
Subject: Q4_K_R4 (#129)

* Something is still wrong

* Simply don't see what is wrong

* q4_k_r4: finally works on Zen4

I had forgotten to prevent token_embd.weight being quantized with q4_k_r4!

* q4_k_r4: AVX2

We get PP-512(LLaMA-3.1-8B) = 267 t/s on a Ryzen-5975WX. This is ~30% better than Q4_K_S.

* q4_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 110 t/s. Not quite as good as q4_0_r4, but still a massive improvement compared to the 69 t/s for q4_K.

* q4_k_r4: slightly better AVX2

PP-512 goes from 267 t/s to 282 t/s on Ryzen-5975WX

* Minor

* Minor

---------

Co-authored-by: Iwan Kawrakow
---
 ggml/src/ggml-quants.c | 1 +
 1 file changed, 1 insertion(+)

(limited to 'ggml/src/ggml-quants.c')

diff --git a/ggml/src/ggml-quants.c b/ggml/src/ggml-quants.c
index 7eece2b3..a4b234c5 100644
--- a/ggml/src/ggml-quants.c
+++ b/ggml/src/ggml-quants.c
@@ -15202,6 +15202,7 @@ bool ggml_validate_row_data(enum ggml_type type, const void * data, size_t nbyte
         case GGML_TYPE_Q5_0_R4: break;
         case GGML_TYPE_Q6_0_R4: break;
         case GGML_TYPE_Q8_0_R4: break;
+        case GGML_TYPE_Q4_K_R4: break;
         case GGML_TYPE_Q4_0_4_4:
         case GGML_TYPE_Q4_0_4_8:
             {
--
cgit v1.2.3
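The one-line change above registers the new interleaved type in `ggml_validate_row_data` with a bare `case ...: break;`, the same pattern the earlier `*_R4` types use. As a rough illustration of why such an empty case still matters, here is a minimal sketch (not the actual ggml code) assuming a simplified validator in which a type's data is accepted only if the switch explicitly lists it; the `demo_type` enum and `demo_validate_row_data` function are hypothetical stand-ins for `ggml_type` and `ggml_validate_row_data`:

```c
#include <stdbool.h>

/* Hypothetical stand-in for ggml's GGML_TYPE_* enum. */
enum demo_type {
    DEMO_TYPE_Q8_0_R4,
    DEMO_TYPE_Q4_K_R4,   /* the type the patch adds a case for */
    DEMO_TYPE_UNKNOWN,
};

/* Simplified validator sketch: types whose rows need no further
 * per-block checking just break out of the switch and are accepted;
 * a type the switch does not list is rejected. */
static bool demo_validate_row_data(enum demo_type type) {
    switch (type) {
        case DEMO_TYPE_Q8_0_R4: break;
        case DEMO_TYPE_Q4_K_R4: break;  /* the newly added case */
        default:
            return false;               /* unlisted type: reject */
    }
    return true;
}
```

Under this assumption, omitting the `Q4_K_R4` case would make otherwise valid rows of that type fail validation, which is why the patch adds the case even though it performs no additional checks.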