path: root/examples
Age        | Commit message                                                              | Author
2025-06-19 | add dry sampler (#513)                                                      | firecoperana
2025-06-17 | Send [DONE] for OAI compatibility (#470)                                    | Kawrakow
2025-06-12 | Add top n sigma sampler and other webui fix (#512)                          | firecoperana
2025-06-09 | Docs update (#509)                                                          | saood06
2025-06-08 | Fix non rpc build error (#506)                                              | firecoperana
2025-06-08 | Revert "Rpc improvement (#480)"                                             | Iwan Kawrakow
2025-06-08 | Rpc improvement (#480)                                                      | firecoperana
2025-06-08 | Webui improvement (#481)                                                    | firecoperana
2025-06-07 | Add an endpoint that lists all the saved prompt caches to server (#502)     | saood06
2025-06-03 | Adding top-n-sigma sampler (#489)                                           | Kawrakow
2025-05-28 | set cache_prompt default to true (#465)                                     | saood06
2025-05-23 | Fix MSVC compilation (#448)                                                 | Kawrakow
2025-05-23 | Fix typo in non-AVX2 code branch (#445)                                     | Kawrakow
2025-05-23 | Trellis quants with CPU inference (#441)                                    | Andrew Chan
2025-05-23 | gguf-split : update (#444)                                                  | Nexes the Elder
2025-05-17 | IQ5_KS_R4: row-interleaved IQ5_KS (#426)                                    | Kawrakow
2025-05-15 | Adding IQ5_KS - 5.25 bpw quants (#422)                                      | Kawrakow
2025-05-13 | Fix imatrix calculation for MLA models (#411)                               | Kawrakow
2025-05-12 | Add batch warmup to sweep-bench (#375)                                      | Kawrakow
2025-04-14 | imatrix: collect layer influence statistics (#328)                          | Kawrakow
2025-04-14 | Add ability to hide imatrix details in llama-quantize (#329)                | Kawrakow
2025-04-12 | Fix KLD precision (#325)                                                    | Kawrakow
2025-04-07 | Add copyright notices (#317)                                                | Kawrakow
2025-03-25 | llama-bench: enable having different number of threads for tg and pp (#284) | Kawrakow
2025-03-25 | Update sweep bench (depracating .jsonl support) (#289)                      | saood06
2025-03-23 | Test transparent huge pages on Linux (#278)                                 | Kawrakow
2025-03-21 | Specify tensor name regex for tensors to be repacked (#274)                 | Kawrakow
2025-03-21 | Convert models to row-interleaved quants using the quantize tool (#272)     | Kawrakow
2025-03-18 | FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)                | Kawrakow
2025-03-10 | DeepSeek imatrix stuff (#250)                                               | Kawrakow
2025-03-07 | Custom quantization rules with regular expressions (#244)                   | Kawrakow
2025-03-02 | SER - Smart Expert Reduction (#239)                                         | Kawrakow
2025-03-01 | Reduce size of compute buffers (#237)                                       | Kawrakow
2025-02-27 | Option to use MLA without a transposed cache (#235)                         | Kawrakow
2025-02-25 | Give the user the option to override where model weights are stored (#232)  | Kawrakow
2025-02-23 | Fused MoE ffn_up and ffn_gate (#229)                                        | Kawrakow
2025-02-23 | Add new sweep-bench benchmark (#225)                                        | saood06
2025-02-19 | Q8_KV: 8-bit quantization type targeting the KV cache (#208)                | Kawrakow
2025-02-12 | Fix imatrix overprotectiveness (#202)                                       | Kawrakow
2025-02-10 | Load all MoE experts during warmup and make warmup 1 token (#198)           | saood06
2025-02-09 | Add optional MLA (#188)                                                     | Kawrakow
2025-02-06 | Rename q4_0_r4, q8_0_r4 and iq4_xs_r4 to _r8 (#189)                         | Kawrakow
2025-02-06 | IQ1_M_R4: better 1.75 bpw quants (#187)                                     | Kawrakow
2025-02-05 | IQ1_S_R4: better 1.5 bpw quants (#185)                                      | Kawrakow
2025-01-30 | Faster Q4_K_R4 and Q5_K_R4 on AVX2/Zen4 (#182)                              | Kawrakow
2025-01-29 | Various (#181)                                                              | Kawrakow
2025-01-12 | MoE fix for R4 quants (#170)                                                | Kawrakow
2024-12-23 | IQ3_S_R4 (#162)                                                             | Kawrakow
2024-12-21 | IQ2_S_R4 (#156)                                                             | Kawrakow
2024-12-21 | IQ2_XS_R4 (#155)                                                            | Kawrakow