author    | Kawrakow <iwankawrakow@gmail.com> | 2024-10-01 10:56:50 +0300
committer | GitHub <noreply@github.com>       | 2024-10-01 10:56:50 +0300
commit    | c2ff4f936a3060cb1ef6adc6e7c2664324c89d84 (patch)
tree      | 621e9012f130f4a6d852f464b49bca9151ef372b /ggml/src/ggml.c
parent    | 8cba4789da860d32cfc6d14f96ed37ade9e334bd (diff)
iqk_mul_mat: better iq4_nl implementation on Zen4/AVX2 (#72)
* iqk_mul_mat: better iq4_nl implementation on Zen4/AVX2
PP-512 performance for LLaMA-3.1-8B goes to 162.6 t/s, up from 133.2 t/s.
* Fix AVX2
In addition to fixing iq4_nl, it seems I never adjusted the AVX2
implementation for iq2_tn after the block scale removal.
This commit also fixes that.
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'ggml/src/ggml.c')
-rw-r--r-- | ggml/src/ggml.c | 4
1 file changed, 4 insertions, 0 deletions
diff --git a/ggml/src/ggml.c b/ggml/src/ggml.c
index 184a31a8..ee83fc43 100644
--- a/ggml/src/ggml.c
+++ b/ggml/src/ggml.c
@@ -1049,7 +1049,11 @@ static const ggml_type_traits_t type_traits[GGML_TYPE_COUNT] = {
         .from_float = quantize_row_iq4_nl,
         .from_float_ref = (ggml_from_float_t)quantize_row_iq4_nl_ref,
         .vec_dot = ggml_vec_dot_iq4_nl_q8_0,
+#if GGML_USE_IQK_MULMAT && defined __AVX2__
+        .vec_dot_type = GGML_TYPE_Q8_1,
+#else
         .vec_dot_type = GGML_TYPE_Q8_0,
+#endif
         .nrows = 1,
         .row_meta_size = 0,
     },
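
For context, the field toggled here, .vec_dot_type, tells ggml's matmul path which quantized format the float activations (src1) must be converted to before the weight type's vec_dot kernel runs; on Zen4/AVX2 the faster iqk iq4_nl kernel consumes Q8_1 activations (which also carry a per-block sum term) instead of Q8_0. The standalone C sketch below only illustrates that mechanism and is not ggml's actual code; the type names and the USE_IQK_MULMAT macro spelling are simplified stand-ins.

/* Standalone sketch (not ggml's real code): how a vec_dot_type entry in a
 * traits table selects the quantization format of the activations before
 * the dot-product kernel runs.  All names here are simplified stand-ins. */
#include <stdio.h>

typedef enum { TYPE_Q8_0, TYPE_Q8_1 } qtype_t;

typedef struct {
    const char *name;
    qtype_t     vec_dot_type;  /* format src1 rows are quantized to */
} traits_t;

/* Mirrors the #if added in the diff above (macro names are illustrative). */
#if defined(USE_IQK_MULMAT) && defined(__AVX2__)
static const traits_t iq4_nl_traits = { "iq4_nl", TYPE_Q8_1 };
#else
static const traits_t iq4_nl_traits = { "iq4_nl", TYPE_Q8_0 };
#endif

int main(void) {
    /* The matmul path would read vec_dot_type here and quantize the float
     * activations into that format before calling the iq4_nl kernel. */
    printf("%s activations are quantized as %s\n", iq4_nl_traits.name,
           iq4_nl_traits.vec_dot_type == TYPE_Q8_1 ? "Q8_1" : "Q8_0");
    return 0;
}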