author     Kawrakow <iwankawrakow@gmail.com>  2025-02-06 14:08:52 +0200
committer  GitHub <noreply@github.com>        2025-02-06 14:08:52 +0200
commit     7f61b3068e18728e5e7e2b95546ff03dd2fd41ac (patch)
tree       f175a942a6ebd2d2d8b08c46fa71d9f6fbad50e7
parent     a6f9f2ec9af92b5a13f035db054aac2fd2efaee7 (diff)
IQ1_M_R4: better 1.75 bpw quants (#187)
* iq1_m_r4: basics (quantize/dequantize)
* iq1_m_r4: Zen4 gemm
* iq1_m_r4: neon gemm
* iq1_m_r4: switch to q8_0_x4 also on AVX2/Zen4

  With the deltas being per group of 8, we cannot make use of the q8 sums
  stored in q8_1, so we get a tiny gain by using q8_0_x4.

* iq1_m_r4: rename mul_mat_iq1_m_r4_q8_1 to mul_mat_iq1_m_r4_q8_0

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'include')
-rw-r--r--  include/llama.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/llama.h b/include/llama.h
index 0f6d15ac..3f25b296 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -197,6 +197,7 @@ extern "C" {
LLAMA_FTYPE_MOSTLY_IQ3_S_R4 = 226, // except 1d tensors
LLAMA_FTYPE_MOSTLY_IQ2_M_R4 = 229, // except 1d tensors
LLAMA_FTYPE_MOSTLY_IQ4_XS_R4 = 230, // except 1d tensors
+ LLAMA_FTYPE_MOSTLY_IQ1_M_R4 = 231, // except 1d tensors
LLAMA_FTYPE_MOSTLY_Q6_0_R4 = 335, // except 1d tensors
LLAMA_FTYPE_MOSTLY_BF16_R16 = 232, // except 1d tensors
LLAMA_FTYPE_MOSTLY_IQ2_BN_R4 = 337, // except 1d tensors