author    Nexes the Elder <124105151+Nexesenex@users.noreply.github.com>  2025-05-24 10:49:10 +0200
committer GitHub <noreply@github.com>  2025-05-24 11:49:10 +0300
commit    c7ecd4e23acb42f1150abf0b118e0a2c7b8dc959 (patch)
tree      6c619eb2d01abd3435f53bb092209935b252c8bb /ggml
parent    a2c42f9985a96abc8b1b4104b0524ea4b2da9363 (diff)
Legacy quants conversion schemes in convert_hf_to_gguf.py (#449)
* Legacy quants conversion schemes in convert_hf_to_gguf.py

  Notably useful for making smaller conversions to generate an iMatrix file.
  The `Q4_0` and `Q4_1` schemes quantize the embeddings, output, attn_k and attn_v tensors in q5_0.
  The `Q5_0` and `Q5_1` schemes quantize the embeddings, output, attn_k and attn_v tensors in q8_0.

  Adapted from the following llama.cpp mainline PR: https://github.com/ggml-org/llama.cpp/pull/9022
  Original author: @chentyjpm

  Also fixes 2 forgotten mentions of FTYPE IQ3_KL in the llama.cpp file.

* forgotten IQ5_KS case mention
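The schemes boil down to a per-tensor output-type override at conversion time. Below is a minimal sketch of that selection logic, assuming gguf-py's `GGMLQuantizationType` enum and GGUF tensor naming; `pick_output_type` and `SENSITIVE_SUFFIXES` are illustrative names, not the script's actual code.

```python
# Minimal sketch of the per-tensor override described above; assumes
# gguf-py's GGMLQuantizationType enum. pick_output_type and
# SENSITIVE_SUFFIXES are illustrative names, not the script's actual API.
import gguf

# Tensors the schemes keep at higher precision than the base legacy quant
# (GGUF names: block tensors like blk.0.attn_k.weight end with these suffixes).
SENSITIVE_SUFFIXES = ("token_embd.weight", "output.weight",
                      "attn_k.weight", "attn_v.weight")

def pick_output_type(name: str,
                     base: gguf.GGMLQuantizationType) -> gguf.GGMLQuantizationType:
    # Non-sensitive tensors keep the base legacy quant type.
    if not name.endswith(SENSITIVE_SUFFIXES):
        return base
    if base in (gguf.GGMLQuantizationType.Q4_0, gguf.GGMLQuantizationType.Q4_1):
        return gguf.GGMLQuantizationType.Q5_0  # Q4_0/Q4_1 schemes bump these to q5_0
    if base in (gguf.GGMLQuantizationType.Q5_0, gguf.GGMLQuantizationType.Q5_1):
        return gguf.GGMLQuantizationType.Q8_0  # Q5_0/Q5_1 schemes bump these to q8_0
    return base
```

In the script itself, the type chosen this way would then drive gguf-py's quantization of the tensor data before it is written to the output file.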
Diffstat (limited to 'ggml')
-rw-r--r--  ggml/src/ggml-cuda/mmvq.cu  1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/ggml/src/ggml-cuda/mmvq.cu b/ggml/src/ggml-cuda/mmvq.cu
index 30a6a58b..89b74f4b 100644
--- a/ggml/src/ggml-cuda/mmvq.cu
+++ b/ggml/src/ggml-cuda/mmvq.cu
@@ -652,6 +652,7 @@ bool ggml_cuda_mmvq_type_supported(ggml_type src0_type) {
case GGML_TYPE_IQ4_KSS:
case GGML_TYPE_IQ2_KS:
case GGML_TYPE_IQ5_K:
+ case GGML_TYPE_IQ5_KS:
case GGML_TYPE_IQ6_K:
case GGML_TYPE_IQ3_S:
return true;