ik_llama.cpp.git — commit log (branch: main, path: /examples)
Age        | Commit message                                                              | Author
2025-06-19 | add dry sampler (#513)                                                      | firecoperana
2025-06-17 | Send [DONE] for OAI compatibility (#470)                                    | Kawrakow
2025-06-12 | Add top n sigma sampler and other webui fix (#512)                          | firecoperana
2025-06-09 | Docs update (#509)                                                          | saood06
2025-06-08 | Fix non rpc build error (#506)                                              | firecoperana
2025-06-08 | Revert "Rpc improvement (#480)"                                             | Iwan Kawrakow
2025-06-08 | Rpc improvement (#480)                                                      | firecoperana
2025-06-08 | Webui improvement (#481)                                                    | firecoperana
2025-06-07 | Add an endpoint that lists all the saved prompt caches to server (#502)     | saood06
2025-06-03 | Adding top-n-sigma sampler (#489)                                           | Kawrakow
2025-05-28 | set cache_prompt default to true (#465)                                     | saood06
2025-05-23 | Fix MSVC compilation (#448)                                                 | Kawrakow
2025-05-23 | Fix typo in non-AVX2 code branch (#445)                                     | Kawrakow
2025-05-23 | Trellis quants with CPU inference (#441)                                    | Andrew Chan
2025-05-23 | gguf-split : update (#444)                                                  | Nexes the Elder
2025-05-17 | IQ5_KS_R4: row-interleaved IQ5_KS (#426)                                    | Kawrakow
2025-05-15 | Adding IQ5_KS - 5.25 bpw quants (#422)                                      | Kawrakow
2025-05-13 | Fix imatrix calculation for MLA models (#411)                               | Kawrakow
2025-05-12 | Add batch warmup to sweep-bench (#375)                                      | Kawrakow
2025-04-14 | imatrix: collect layer influence statistics (#328)                          | Kawrakow
2025-04-14 | Add ability to hide imatrix details in llama-quantize (#329)                | Kawrakow
2025-04-12 | Fix KLD precision (#325)                                                    | Kawrakow
2025-04-07 | Add copyright notices (#317)                                                | Kawrakow
2025-03-25 | llama-bench: enable having different number of threads for tg and pp (#284) | Kawrakow
2025-03-25 | Update sweep bench (depracating .jsonl support) (#289)                      | saood06
2025-03-23 | Test transparent huge pages on Linux (#278)                                 | Kawrakow
2025-03-21 | Specify tensor name regex for tensors to be repacked (#274)                 | Kawrakow
2025-03-21 | Convert models to row-interleaved quants using the quantize tool (#272)     | Kawrakow
2025-03-18 | FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)                | Kawrakow
2025-03-10 | DeepSeek imatrix stuff (#250)                                               | Kawrakow
2025-03-07 | Custom quantization rules with regular expressions (#244)                   | Kawrakow
2025-03-02 | SER - Smart Expert Reduction (#239)                                         | Kawrakow
2025-03-01 | Reduce size of compute buffers (#237)                                       | Kawrakow
2025-02-27 | Option to use MLA without a transposed cache (#235)                         | Kawrakow
2025-02-25 | Give the user the option to override where model weights are stored (#232)  | Kawrakow
2025-02-23 | Fused MoE ffn_up and ffn_gate (#229)                                        | Kawrakow
2025-02-23 | Add new sweep-bench benchmark (#225)                                        | saood06
2025-02-19 | Q8_KV: 8-bit quantization type targeting the KV cache (#208)                | Kawrakow
2025-02-12 | Fix imatrix overprotectiveness (#202)                                       | Kawrakow
2025-02-10 | Load all MoE experts during warmup and make warmup 1 token (#198)           | saood06
2025-02-09 | Add optional MLA (#188)                                                     | Kawrakow
2025-02-06 | Rename q4_0_r4, q8_0_r4 and iq4_xs_r4 to _r8 (#189)                         | Kawrakow
2025-02-06 | IQ1_M_R4: better 1.75 bpw quants (#187)                                     | Kawrakow
2025-02-05 | IQ1_S_R4: better 1.5 bpw quants (#185)                                      | Kawrakow
2025-01-30 | Faster Q4_K_R4 and Q5_K_R4 on AVX2/Zen4 (#182)                              | Kawrakow
2025-01-29 | Various (#181)                                                              | Kawrakow
2025-01-12 | MoE fix for R4 quants (#170)                                                | Kawrakow
2024-12-23 | IQ3_S_R4 (#162)                                                             | Kawrakow
2024-12-21 | IQ2_S_R4 (#156)                                                             | Kawrakow
2024-12-21 | IQ2_XS_R4 (#155)                                                            | Kawrakow