Age | Commit message | Author
---|---|---
2025-07-14 | Ported kimi-k2 support from llama.cpp (#609). Original patch by @gabriellarson: https://github.com/ggml-org/llama.cpp/pull/14654. Co-authored-by: anikifoss <anikifoss> | Aleksey Nikiforov
2024-07-27 | Merge mainline llama.cpp (#3). Merging mainline (WIP): AVX2 and CUDA appear to work, though CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made. Also fixes Metal and removes a check. Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com> | Kawrakow