path: root/src
Age  Commit message  Author
2025-06-08  Fix non rpc build error (#506)  firecoperana
2025-06-08  Revert "Rpc improvement (#480)"  Iwan Kawrakow
2025-06-08  Rpc improvement (#480)  firecoperana
2025-06-06  Make prompt cache saving and restoring MLA aware (#497)  saood06
2025-06-03  Adding top-n-sigma sampler (#489)  Kawrakow
2025-06-03  Adding the XTC sampler (#486)  Kawrakow
2025-05-31  forgotten refs and typo (#478)  Nexes the Elder
2025-05-30  Replace MLA-specific KV cache with the standard KV cache (#469)  Kawrakow
2025-05-24  Legacy quants conversion schemes in convert_hf_to_gguf.py (#449)  Nexes the Elder
2025-05-23  Trellis quants with CPU inference (#441)  Andrew Chan
2025-05-22  Streamline a bit the quant strategies (#443)  Nexes the Elder
2025-05-17  IQ5_KS_R4: row-interleaved IQ5_KS (#426)  Kawrakow
2025-05-15  Adding IQ5_KS - 5.25 bpw quants (#422)  Kawrakow
2025-05-12  Enable faster prompt processing with mainline llama.cpp GGUFs (#409)  Kawrakow
2025-05-12  Faster DeepSeek FA on CUDA (#408)  Kawrakow
2025-05-12  GPU offload policy (#405)  Kawrakow
2025-05-09  Handle incompatible DeepSeek GGUFs (#394)  Kawrakow
2025-05-09  Support for Llama-3-Nemotron models (#377)  saood06
2025-05-02  Fix model architecture name (#366)  saood06
2025-04-29  Apply Qwen3 PR from llama.cpp (#355)  Ben Harris
2025-04-26  Add GLM-4-0414 Model Support (#344)  ubergarm
2025-04-26  Add support for Cohere2 (#341)  Kawrakow
2025-04-25  Fix LLaMA-4 attention (#342)  Kawrakow
2025-04-22  BitNet adjustments (#338)  Kawrakow
2025-04-22  Add support for bitnet2b_2501 model (#337)  saood06
2025-04-11  Correct L4 rms_norm (#324)  Kawrakow
2025-04-10  LlaMA-4 support (text only) (#321)  Kawrakow
2025-04-08  Guard against attempts to use MLA for non-MLA models (#320)  Kawrakow
2025-04-07  Add copyright notices (#317)  Kawrakow
2025-04-01  Additional guards for interleaved quants (#299)  Kawrakow
2025-03-27  Make sure tensor row size is multiple of block size also when quantizing with...  Kawrakow
2025-03-23  Improve DeepSeek batched processing speed (#282)  Kawrakow
2025-03-23  Test transparent huge pages on Linux (#278)  Kawrakow
2025-03-22  Add Gemma3 support (text only) (#276)  Kawrakow
2025-03-21  Fix bug: missing parentheses in logical expression (#275)  Kawrakow
2025-03-21  Specify tensor name regex for tensors to be repacked (#274)  Kawrakow
2025-03-21  FlashMLA-3: the best of both worlds (CPU only) (#273)  Kawrakow
2025-03-21  Convert models to row-interleaved quants using the quantize tool (#272)  Kawrakow
2025-03-19  Honor mmap setting when using tensor overrides (#270)  Kawrakow
2025-03-18  Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)  Kawrakow
2025-03-18  FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)  Kawrakow
2025-03-17  Prepare wk_b tensors of DeepSeek models on the fly (#259)  Kawrakow
2025-03-13  FlashMLA-2 (CPU): faster and smaller compute buffer size (#253)  Kawrakow
2025-03-10  DeepSeek imatrix stuff (#250)  Kawrakow
2025-03-10  Faster MoE token generation on CUDA (#248)  Kawrakow
2025-03-09  This works on CUDA, but (#247)  Kawrakow
2025-03-08  Faster FlashMLA prompt processing (#246)  Kawrakow
2025-03-07  Custom quantization rules with regular expressions (#244)  Kawrakow
2025-03-05  DeepSeek CUDA Flash Attention (#241)  Kawrakow
2025-03-03  Flash MLA (CPU only) (#240)  Kawrakow