path: root/convert_hf_to_gguf_update.py
author    ubergarm <leimgrub@gmail.com>  2025-07-15 13:54:04 -0400
committer GitHub <noreply@github.com>    2025-07-15 19:54:04 +0200
commit    13b2f193723486f46efe34297cf797186ab14bc2 (patch)
tree      bda8a4b50adb20a564302e16dc42bed45ea798d4 /convert_hf_to_gguf_update.py
parent    2081b3fccb9923699bf4d5e926d8719fc1d12c39 (diff)
kimi-k2 convert script and chat template (#612)
* convert_hf_to_gguf for Kimi-K2-Instruct: adapt mainline `PR14653` for the tokenizer while maintaining proper MLA tensors. Tested with the following workflow: use deepseek's fp8_cast_bf16.py with triton-cpu to upcast the fp8 safetensors to bf16 safetensors, then run this convert_hf_to_gguf.
* Add Kimi-K2 chat template for moonshotai/Kimi-K2-Instruct (https://github.com/ikawrakow/ik_llama.cpp/pull/609#issuecomment-3071259454)
* kimi-k2: add the assistant prefix ("ass") to the template so generation starts a response
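For context, the fp8-to-bf16 upcast mentioned in the workflow is a blockwise dequantization. Below is a minimal sketch, assuming DeepSeek-style checkpoints where each fp8 `weight` tensor is stored as `float8_e4m3fn` alongside a per-128x128-block `weight_scale_inv` companion tensor; the tensor layout and block size are assumptions taken from the DeepSeek-V3 release, not from this commit:

```python
# Hedged sketch of the fp8 -> bf16 upcast step; assumes DeepSeek-style
# block-scaled float8_e4m3fn weights with 128x128 scale blocks.
import torch

def dequant_fp8_to_bf16(weight: torch.Tensor,
                        scale_inv: torch.Tensor,
                        block: int = 128) -> torch.Tensor:
    """Expand per-block scales to element granularity and multiply."""
    w = weight.to(torch.float32)
    # scale_inv has shape (ceil(M/block), ceil(N/block)); broadcast each
    # scale over its block, then trim to the weight's exact shape.
    s = scale_inv.to(torch.float32)
    s = s.repeat_interleave(block, dim=0)[: w.shape[0]]
    s = s.repeat_interleave(block, dim=1)[:, : w.shape[1]]
    return (w * s).to(torch.bfloat16)
```

Once every weight has been upcast this way, the resulting bf16 safetensors directory can be fed to convert_hf_to_gguf as described above.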
Diffstat (limited to 'convert_hf_to_gguf_update.py')
-rwxr-xr-x  convert_hf_to_gguf_update.py | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/convert_hf_to_gguf_update.py b/convert_hf_to_gguf_update.py
index f2e6cc37..d6541987 100755
--- a/convert_hf_to_gguf_update.py
+++ b/convert_hf_to_gguf_update.py
@@ -96,6 +96,7 @@ models = [
{"name": "smollm", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/HuggingFaceTB/SmolLM-135M", },
{"name": "deepseek-v3", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/deepseek-ai/DeepSeek-V3"},
{"name": "seed-coder", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base", },
+ {"name": "kimi-k2", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/moonshotai/Kimi-K2-Base", "chkhsh": "81212dc7cdb7e0c1074ca62c5aeab0d43c9f52b8a737be7b12a777c953027890", },
]
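The `chkhsh` value recorded in the new entry is the tokenizer fingerprint that convert_hf_to_gguf.py later uses to select the right pre-tokenizer. A minimal sketch of how the update script derives it: it encodes a fixed test string and hashes the resulting token IDs. Here `chktxt` stands in for the script's much longer built-in test text, and `trust_remote_code=True` is an assumption for Kimi-K2's custom tokenizer:

```python
from hashlib import sha256
from transformers import AutoTokenizer

# Placeholder for the script's canonical test text (mixed whitespace,
# emoji, CJK, etc.); the hash only matches when using the real string.
chktxt = "..."

tokenizer = AutoTokenizer.from_pretrained("moonshotai/Kimi-K2-Base",
                                          trust_remote_code=True)
chktok = tokenizer.encode(chktxt)
chkhsh = sha256(str(chktok).encode()).hexdigest()
# expected: 81212dc7cdb7e0c1074ca62c5aeab0d43c9f52b8a737be7b12a777c953027890
print(chkhsh)
```

If the computed hash does not match any entry in the models list, conversion fails with a request to re-run the update script, which is why this commit registers Kimi-K2's fingerprint here.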