Commit log for ik_llama.cpp.git (branch: main, path: src)

Date        Author         Commit message
2025-04-01  Kawrakow       Additional guards for interleaved quants (#299)
2025-03-27  Kawrakow       Make sure tensor row size is multiple of block size also when quantizing with...
2025-03-23  Kawrakow       Improve DeepSeek batched processing speed (#282)
2025-03-23  Kawrakow       Test transparent huge pages on Linux (#278)
2025-03-22  Kawrakow       Add Gemma3 support (text only) (#276)
2025-03-21  Kawrakow       Fix bug: missing parentheses in logical expression (#275)
2025-03-21  Kawrakow       Specify tensor name regex for tensors to be repacked (#274)
2025-03-21  Kawrakow       FlashMLA-3: the best of both worlds (CPU only) (#273)
2025-03-21  Kawrakow       Convert models to row-interleaved quants using the quantize tool (#272)
2025-03-19  Kawrakow       Honor mmap setting when using tensor overrides (#270)
2025-03-18  Kawrakow       Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)
2025-03-18  Kawrakow       FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)
2025-03-17  Kawrakow       Prepare wk_b tensors of DeepSeek models on the fly (#259)
2025-03-13  Kawrakow       FlashMLA-2 (CPU): faster and smaller compute buffer size (#253)
2025-03-10  Kawrakow       DeepSeek imatrix stuff (#250)
2025-03-10  Kawrakow       Faster MoE token generation on CUDA (#248)
2025-03-09  Kawrakow       This works on CUDA, but (#247)
2025-03-08  Kawrakow       Faster FlashMLA prompt processing (#246)
2025-03-07  Kawrakow       Custom quantization rules with regular expressions (#244)
2025-03-05  Kawrakow       DeepSeek CUDA Flash Attention (#241)
2025-03-03  Kawrakow       Flash MLA (CPU only) (#240)
2025-03-02  Kawrakow       SER - Smart Expert Reduction (#239)
2025-03-01  Kawrakow       Reduce size of compute buffers (#237)
2025-02-27  Kawrakow       Option to use MLA without a transposed cache (#235)
2025-02-27  Kawrakow       Faster MLA on CUDA (#234)
2025-02-25  Kawrakow       Give the user the option to override where model weights are stored (#232)
2025-02-23  Kawrakow       Fused MoE ffn_up and ffn_gate (#229)
2025-02-20  Iwan Kawrakow  Honor attn_output specified in the command line also for low-bit quants
2025-02-19  Kawrakow       Q8_KV: 8-bit quantization type targeting the KV cache (#208)
2025-02-13  Kawrakow       MLA: allow Q8_0 K-cache for MLA (#206)
2025-02-13  Kawrakow       Faster MLA prompt processing (#205)
2025-02-11  Kawrakow       DeepSeek FA support (CPU only) (#200)
2025-02-10  saood06        Load all MoE experts during warmup and make warmup 1 token (#198)
2025-02-09  Kawrakow       Add optional MLA (#188)
2025-02-07  Kawrakow       cuda: non-contiguous rms norm (#190)
2025-02-06  Kawrakow       Rename q4_0_r4, q8_0_r4 and iq4_xs_r4 to _r8 (#189)
2025-02-06  Kawrakow       IQ1_M_R4: better 1.75 bpw quants (#187)
2025-02-05  Kawrakow       IQ1_S_R4: better 1.5 bpw quants (#185)
2025-01-30  Kawrakow       Deepseek-Lite (#184)
2025-01-27  Kawrakow       Minor performance improvements (#179)
2025-01-27  Kawrakow       Interleave 8 rows (Q8_0, IQ4_XS) (#178)
2025-01-24  Kawrakow       Update chat templates (#177)
2025-01-23  saood06        Deepseek V3 support added (#176)
2025-01-23  Iwan Kawrakow  Add Deepseek-R1-Distill pre-tokenizer
2025-01-10  Kawrakow       Be able to re-quantize MS BitNet I2_S models (#169)
2025-01-10  Kawrakow       Falcon3 changes (#168)
2024-12-23  Kawrakow       IQ3_S_R4 (#162)
2024-12-21  Kawrakow       IQ2_S_R4 (#156)
2024-12-21  Kawrakow       IQ2_XS_R4 (#155)
2024-12-20  Kawrakow       IQ2_XXS_R4 (#154)