path: root/src/llama-vocab.cpp
Age | Commit message | Author
2025-07-14 | Ported kimi-k2 support from llama.cpp (#609) | Aleksey Nikiforov
    Original patch by @gabriellarson: https://github.com/ggml-org/llama.cpp/pull/14654
    Co-authored-by: anikifoss <anikifoss>
2025-07-09 | add hunyuan moe support for #561 (#565) | ubergarm
    * add hunyuan moe
    * Don't reshape Vcur
    * Apply chat template fix from mainline PR14584
2025-07-06 | Special handling of Seed Coder FIM tokens (#585) | Fizz~
    * Special handling of Seed Coder FIM tokens
    * vocab: Add Seed Coder pretokenizer
    * Formatting fix
    * Update llama.h
2025-06-26 | Add Falcon-Edge support (#555) | Kawrakow
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-06-19 | add dry sampler (#513) | firecoperana
    * add dry sampler
    * use vocab instead of model in dry_init function
    * fix compile error for build test
    Co-authored-by: firecoperana <firecoperana>
2025-04-10 | LlaMA-4 support (text only) (#321) | Kawrakow
    * llama4: WIP
    * llama4: this seems to be working
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-01-23 | Deepseek V3 support added (#176) | saood06
    Co-authored-by: Stanisław Szymczyk <sszymczy@gmail.com>
2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow
    * Merge mainline
    * Fix after merge
    * Remove CI check
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
    * Merging mainline - WIP
    * Merging mainline - WIP
      AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.
    * Merging mainline - fix Metal
    * Remove check
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>