author    Kawrakow <iwankawrakow@gmail.com>  2025-06-21 16:35:08 +0200
committer GitHub <noreply@github.com>        2025-06-21 16:35:08 +0200
commit    4f97409b80dffa96abe1a31d0a06e6dde78e91b7 (patch)
tree      533bcb5cc7cc0bccf307317ae5b8d37403fd19b5 /src/llama-vocab.cpp
parent    a98b7678a305c560117ce0a63a3529f2aaa17acb (diff)
Faster ARM_NEON GEMM implementation for legacy quants (#546)
* iq2_kt and iq3_kt work with the new int trellis. Much slower than the fp16-based trellis; I guess Apple doesn't have int8_t SIMD on the M2-Max GPU.
* q4_0: 83.6 t/s -> 128.4 t/s. q4_0_r8 is at 123.5 t/s.
* q5_0: 74.2 t/s -> 128.5 t/s. q5_0_r4 is at 111.4 t/s.
* q6_0: 74.2 t/s -> 128.8 t/s. q6_0_r4 is at 107.2 t/s.
* q8_0: 84.5 t/s -> 128.7 t/s. q8_0_r8 is at 131 t/s.
* iq4_nl: 84.5 t/s -> 128.1 t/s. iq4_nl_r4 is at 120.4 t/s.
* q4_1: 74.4 t/s -> 115.4 t/s. There is no repacked variant.
* q5_1: 64.2 t/s -> 114.9 t/s. There is no repacked variant.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
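The kernel itself is not visible on this page (the view is filtered to src/llama-vocab.cpp, which this commit does not touch). For context only, below is a minimal sketch of the kind of NEON int8 dot-product that a q4_0 x q8_0 GEMM of this sort builds on. The block structs mirror ggml's legacy block_q4_0/block_q8_0 layouts (32 quants per block, one fp16 scale, low/high nibbles holding the first/second 16 quants); the function name vec_dot_q4_0_q8_0 and the loop structure are illustrative assumptions, not the actual ik_llama.cpp implementation, which interleaves rows and columns for higher throughput.

// Sketch only: requires an AArch64 compiler with dot-product support,
// e.g. -march=armv8.2-a+dotprod.
#include <arm_neon.h>
#include <stdint.h>

#define QK 32  // quants per block for q4_0 / q8_0

typedef struct { __fp16 d; uint8_t qs[QK/2]; } block_q4_0;  // fp16 scale + 32 x 4-bit quants
typedef struct { __fp16 d; int8_t  qs[QK];   } block_q8_0;  // fp16 scale + 32 x 8-bit quants

// Dot product of one q4_0 row segment with one q8_0 activation segment,
// nblocks blocks long (hypothetical helper, not the ik_llama.cpp kernel).
static float vec_dot_q4_0_q8_0(int nblocks, const block_q4_0 *x, const block_q8_0 *y) {
    float32x4_t acc = vdupq_n_f32(0.0f);
    const uint8x16_t mask = vdupq_n_u8(0x0F);
    const int8x16_t  bias = vdupq_n_s8(8);  // q4_0 stores unsigned nibbles; value = nibble - 8
    for (int i = 0; i < nblocks; ++i) {
        const uint8x16_t q4 = vld1q_u8(x[i].qs);
        // unpack low/high nibbles and recentre to signed int8
        const int8x16_t xl = vsubq_s8(vreinterpretq_s8_u8(vandq_u8(q4, mask)), bias);
        const int8x16_t xh = vsubq_s8(vreinterpretq_s8_u8(vshrq_n_u8(q4, 4)), bias);
        const int8x16_t yl = vld1q_s8(y[i].qs);
        const int8x16_t yh = vld1q_s8(y[i].qs + 16);
        // int8 dot products accumulated into int32 lanes (ARMv8.2 dotprod)
        int32x4_t dot = vdotq_s32(vdupq_n_s32(0), xl, yl);
        dot = vdotq_s32(dot, xh, yh);
        // scale the integer result by the product of the two block scales
        const float d = (float)x[i].d * (float)y[i].d;
        acc = vmlaq_n_f32(acc, vcvtq_f32_s32(dot), d);
    }
    return vaddvq_f32(acc);  // horizontal sum of the four accumulator lanes
}

The t/s gains quoted in the commit message come from doing work of this shape across multiple rows at once rather than per-row scalar unpacking; the sketch above only shows the single-row building block.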
Diffstat (limited to 'src/llama-vocab.cpp')
0 files changed, 0 insertions, 0 deletions