author    | Kawrakow <iwankawrakow@gmail.com>  | 2025-06-21 16:35:08 +0200
committer | GitHub <noreply@github.com>        | 2025-06-21 16:35:08 +0200
commit    | 4f97409b80dffa96abe1a31d0a06e6dde78e91b7
tree      | 533bcb5cc7cc0bccf307317ae5b8d37403fd19b5 /src/llama.cpp
parent    | a98b7678a305c560117ce0a63a3529f2aaa17acb
Faster ARM_NEON GEMM implementation for legacy quants (#546)
* iq2_kt and iq3_kt work with new int trellis
Much slower than the fp16-based trellis. I guess Apple doesn't
have int8_t SIMD on the M2-Max GPU.
* q4_0
83.6 t/s -> 128.4 t/s. q4_0_r8 is at 123.5 t/s.
* q5_0
74.2 t/s -> 128.5 t/s. q5_0_r4 is at 111.4 t/s.
* q6_0
74.2 t/s -> 128.8 t/s. q6_0_r4 is at 107.2 t/s.
* q8_0
84.5 t/s -> 128.7 t/s. q8_0_r8 is at 131 t/s (see the dot-product sketch after the commit message).
* iq4_nl
84.5 t/s -> 128.1 t/s. iq4_nl_r4 is at 120.4 t/s.
* q4_1
74.4 t/s -> 115.4 t/s. There is no repacked variant.
* q5_1
64.2 t/s -> 114.9 t/s. There is no repacked variant.
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
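
The kernels themselves are not in this diff (the diffstat below only covers src/llama.cpp), so as a rough illustration of what the speedups above measure, here is a minimal sketch of the per-block q8_0 x q8_0 dot product that an ARM_NEON GEMM for these legacy quants reduces to. It is not the PR's actual implementation: it assumes the standard llama.cpp block layout (one fp16 scale plus 32 signed 8-bit quants per block, with __fp16 standing in for ggml_half) and a CPU with the dotprod extension; the name vec_dot_q8_0_q8_0 is used here only for illustration.

// Build on AArch64 with dotprod support, e.g. g++ -O2 -march=armv8.2-a+dotprod q8dot.cpp
#include <arm_neon.h>
#include <cstdint>
#include <cstdio>

#define QK8_0 32

// q8_0 block as in llama.cpp: one per-block scale + 32 int8 quants
// (__fp16 used here in place of ggml_half).
struct block_q8_0 {
    __fp16 d;
    int8_t qs[QK8_0];
};

// Dot product over n values stored as q8_0 blocks (n must be a multiple of QK8_0).
static float vec_dot_q8_0_q8_0(int n, const block_q8_0 * x, const block_q8_0 * y) {
    float sum = 0.0f;
    for (int i = 0; i < n/QK8_0; ++i) {
        // 32 int8 weights per block -> two 16-lane NEON registers per operand
        const int8x16_t x0 = vld1q_s8(x[i].qs);
        const int8x16_t x1 = vld1q_s8(x[i].qs + 16);
        const int8x16_t y0 = vld1q_s8(y[i].qs);
        const int8x16_t y1 = vld1q_s8(y[i].qs + 16);
        // vdotq_s32 accumulates 4-way int8 dot products into int32 lanes
        int32x4_t acc = vdupq_n_s32(0);
        acc = vdotq_s32(acc, x0, y0);
        acc = vdotq_s32(acc, x1, y1);
        // horizontal sum of the lanes, then apply both per-block scales
        sum += (float)x[i].d * (float)y[i].d * (float)vaddvq_s32(acc);
    }
    return sum;
}

int main() {
    block_q8_0 x, y;
    x.d = (__fp16)0.5f; y.d = (__fp16)0.25f;
    for (int i = 0; i < QK8_0; ++i) { x.qs[i] = (int8_t)(i - 16); y.qs[i] = 2; }
    std::printf("dot = %g\n", vec_dot_q8_0_q8_0(QK8_0, &x, &y));
    return 0;
}

A real GEMM kernel would of course process several rows and columns at once and keep more accumulators in flight; the repacked _r4/_r8 formats mentioned above exist so that 4 or 8 interleaved rows can be loaded and multiplied together.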
Diffstat (limited to 'src/llama.cpp')
-rw-r--r-- | src/llama.cpp | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/src/llama.cpp b/src/llama.cpp
index c0f147b9..a70d2582 100644
--- a/src/llama.cpp
+++ b/src/llama.cpp
@@ -18722,7 +18722,7 @@ static std::pair<ggml_type, int> interleaved_properties(ggml_type type) {
         { GGML_TYPE_IQ5_KS_R4, { GGML_TYPE_IQ5_KS,  4} },
         { GGML_TYPE_IQ5_K_R4,  { GGML_TYPE_IQ5_K,   4} },
         { GGML_TYPE_Q8_KV_R8,  { GGML_TYPE_Q8_KV,   8} },
-        { GGML_TYPE_Q8_K_R8,   { GGML_TYPE_Q8_K,    8} },
+        { GGML_TYPE_Q8_K_R8,   { GGML_TYPE_Q8_0,    8} },
         { GGML_TYPE_BF16_R16,  { GGML_TYPE_BF16,   16} },
     };
     if (auto it = k_map.find(type); it != k_map.end()) return it->second;
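
For context on the one-line change above: judging by its name and the entries visible in the hunk, interleaved_properties maps a row-interleaved ("repacked") type to its base type and the number of interleaved rows, and this commit makes Q8_K_R8 report Q8_0 rather than Q8_K as its base. A self-contained sketch of that lookup pattern, using stand-in enumerators instead of the real ggml_type values and an invented fallback, might look like this:

#include <cstdio>
#include <map>
#include <utility>

// Stand-in for ggml_type; the real enumerators live in the ggml headers of this fork.
enum fake_type { T_Q8_0, T_Q8_K, T_Q8_K_R8, T_BF16, T_BF16_R16 };

// Returns {base type, interleaved row count} for repacked types.
static std::pair<fake_type, int> interleaved_properties(fake_type type) {
    static const std::map<fake_type, std::pair<fake_type, int>> k_map = {
        { T_Q8_K_R8,  { T_Q8_0,  8 } },  // after this commit: base Q8_0, 8 interleaved rows
        { T_BF16_R16, { T_BF16, 16 } },
    };
    if (auto it = k_map.find(type); it != k_map.end()) return it->second;
    return { type, 1 };  // invented fallback for non-interleaved types, not shown in the hunk
}

int main() {
    auto [base, nrows] = interleaved_properties(T_Q8_K_R8);
    std::printf("base type id = %d, interleaved rows = %d\n", (int)base, nrows);
    return 0;
}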