ik_llama.cpp.git (branch: main)
path: root/src
Age | Commit message | Author
2025-06-08 | Fix non rpc build error (#506) | firecoperana
2025-06-08 | Revert "Rpc improvement (#480)" | Iwan Kawrakow
2025-06-08 | Rpc improvement (#480) | firecoperana
2025-06-06 | Make prompt cache saving and restoring MLA aware (#497) | saood06
2025-06-03 | Adding top-n-sigma sampler (#489) | Kawrakow
2025-06-03 | Adding the XTC sampler (#486) | Kawrakow
2025-05-31 | forgotten refs and typo (#478) | Nexes the Elder
2025-05-30 | Replace MLA-specific KV cache with the standard KV cache (#469) | Kawrakow
2025-05-24 | Legacy quants conversion schemes in convert_hf_to_gguf.py (#449) | Nexes the Elder
2025-05-23 | Trellis quants with CPU inference (#441) | Andrew Chan
2025-05-22 | Streamline a bit the quant strategies (#443) | Nexes the Elder
2025-05-17 | IQ5_KS_R4: row-interleaved IQ5_KS (#426) | Kawrakow
2025-05-15 | Adding IQ5_KS - 5.25 bpw quants (#422) | Kawrakow
2025-05-12 | Enable faster prompt processing with mainline llama.cpp GGUFs (#409) | Kawrakow
2025-05-12 | Faster DeepSeek FA on CUDA (#408) | Kawrakow
2025-05-12 | GPU offload policy (#405) | Kawrakow
2025-05-09 | Handle incompatible DeepSeek GGUFs (#394) | Kawrakow
2025-05-09 | Support for Llama-3-Nemotron models (#377) | saood06
2025-05-02 | Fix model architecture name (#366) | saood06
2025-04-29 | Apply Qwen3 PR from llama.cpp (#355) | Ben Harris
2025-04-26 | Add GLM-4-0414 Model Support (#344) | ubergarm
2025-04-26 | Add support for Cohere2 (#341) | Kawrakow
2025-04-25 | Fix LLaMA-4 attention (#342) | Kawrakow
2025-04-22 | BitNet adjustments (#338) | Kawrakow
2025-04-22 | Add support for bitnet2b_2501 model (#337) | saood06
2025-04-11 | Correct L4 rms_norm (#324) | Kawrakow
2025-04-10 | LlaMA-4 support (text only) (#321) | Kawrakow
2025-04-08 | Guard against attempts to use MLA for non-MLA models (#320) | Kawrakow
2025-04-07 | Add copyright notices (#317) | Kawrakow
2025-04-01 | Additional guards for interleaved quants (#299) | Kawrakow
2025-03-27 | Make sure tensor row size is multiple of block size also when quantizing with... | Kawrakow
2025-03-23 | Improve DeepSeek batched processing speed (#282) | Kawrakow
2025-03-23 | Test transparent huge pages on Linux (#278) | Kawrakow
2025-03-22 | Add Gemma3 support (text only) (#276) | Kawrakow
2025-03-21 | Fix bug: missing parentheses in logical expression (#275) | Kawrakow
2025-03-21 | Specify tensor name regex for tensors to be repacked (#274) | Kawrakow
2025-03-21 | FlashMLA-3: the best of both worlds (CPU only) (#273) | Kawrakow
2025-03-21 | Convert models to row-interleaved quants using the quantize tool (#272) | Kawrakow
2025-03-19 | Honor mmap setting when using tensor overrides (#270) | Kawrakow
2025-03-18 | Make Q8_0 KV cache work with mla=2,fa on CUDA (#264) | Kawrakow
2025-03-18 | FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260) | Kawrakow
2025-03-17 | Prepare wk_b tensors of DeepSeek models on the fly (#259) | Kawrakow
2025-03-13 | FlashMLA-2 (CPU): faster and smaller compute buffer size (#253) | Kawrakow
2025-03-10 | DeepSeek imatrix stuff (#250) | Kawrakow
2025-03-10 | Faster MoE token generation on CUDA (#248) | Kawrakow
2025-03-09 | This works on CUDA, but (#247) | Kawrakow
2025-03-08 | Faster FlashMLA prompt processing (#246) | Kawrakow
2025-03-07 | Custom quantization rules with regular expressions (#244) | Kawrakow
2025-03-05 | DeepSeek CUDA Flash Attention (#241) | Kawrakow
2025-03-03 | Flash MLA (CPU only) (#240) | Kawrakow