ik_llama.cpp.git (branch: main) - commit log

Age | Commit message | Author
2025-03-07 | Custom quantization rules with regular expressions (#244) | Kawrakow
2025-03-05 | DeepSeek CUDA Flash Attention (#241) | Kawrakow
2025-03-03 | Flash MLA (CPU only) (#240) | Kawrakow
2025-03-02 | SER - Smart Expert Reduction (#239) | Kawrakow
2025-03-01 | A better way to measure the cost of ggml_barrier (#238) | Kawrakow
2025-03-01 | Reduce size of compute buffers (#237) | Kawrakow
2025-02-27 | Option to use MLA without a transposed cache (#235) | Kawrakow
2025-02-27 | Faster MLA on CUDA (#234) | Kawrakow
2025-02-25 | Give the user the option to override where model weights are stored (#232) | Kawrakow
2025-02-24 | Fix #230 (#231) | Kawrakow
2025-02-23 | Fused MoE ffn_up and ffn_gate (#229) | Kawrakow
2025-02-23 | Add new sweep-bench benchmark (#225) | saood06
2025-02-23 | Fix compilation error with IQK_FA_ALL_QUANTS enabled (#226) | Kawrakow
2025-02-22 | Fix #217 (#220) | Kawrakow
2025-02-22 | Fuse MoE up and gate matrix multiplications (#219) | Kawrakow
2025-02-22 | Better strategy for attention matrix multiplications when generating tokens ... | Kawrakow
2025-02-21 | Hopefully this really fixes the confusion between AVX512 and FANCY_SIMD (#216) | Kawrakow
2025-02-20 | Honor attn_output specified in the command line also for low-bit quants | Iwan Kawrakow
2025-02-20 | Fix NEON gemm/gemv for legacy quants when row size is not divisible by 128 (#... | Kawrakow
2025-02-20 | Optimized GEMM/GEMV for IQ1_S (#212) | Kawrakow
2025-02-19 | Q8_KV: 8-bit quantization type targeting the KV cache (#208) | Kawrakow
2025-02-19 | Repack also experts (#210) | Kawrakow
2025-02-15 | Bug fix in activation quantization | Iwan Kawrakow
2025-02-15 | Moving 4D gemm logic from ggml.c to iqk_mul_mat.cpp (#207) | Kawrakow
2025-02-13 | MLA: allow Q8_0 K-cache for MLA (#206) | Kawrakow
2025-02-13 | Faster MLA prompt processing (#205) | Kawrakow
2025-02-12 | Fix iqk_mul_mat on AVX512 systems that are missing BF16 support (#204) | Kawrakow
2025-02-12 | Fix imatrix overprotectiveness (#202) | Kawrakow
2025-02-11 | DeepSeek FA support (CPU only) (#200) | Kawrakow
2025-02-10 | Load all MoE experts during warmup and make warmup 1 token (#198) | saood06
2025-02-09 | Add optional MLA (#188) | Kawrakow
2025-02-09 | FA: Add option to build all FA kernels (#197) | Kawrakow
2025-02-09 | Use Q8_K_128 for IQ1_S_R4 and IQ1_M_R4 matrix multiplications (#194) | Kawrakow
2025-02-08 | Revert #79 (#192) | Kawrakow
2025-02-07 | cuda: non-contiguous rms norm (#190) | Kawrakow
2025-02-07 | Add additional checks for iq1_s_r4 quantization (#191) | Kawrakow
2025-02-06 | Rename q4_0_r4, q8_0_r4 and iq4_xs_r4 to _r8 (#189) | Kawrakow
2025-02-06 | IQ1_M_R4: better 1.75 bpw quants (#187) | Kawrakow
2025-02-05 | iq1_s_r4: slightly faster NEON gemm/gemv (#186) | Kawrakow
2025-02-05 | IQ1_S_R4: better 1.5 bpw quants (#185) | Kawrakow
2025-01-30 | Deepseek-Lite (#184) | Kawrakow
2025-01-30 | Faster Q4_K_R4 and Q5_K_R4 on AVX2/Zen4 (#182) | Kawrakow
2025-01-29 | Various (#181) | Kawrakow
2025-01-27 | Minor performance improvements (#179) | Kawrakow
2025-01-27 | Interleave 8 rows (Q8_0, IQ4_XS) (#178) | Kawrakow
2025-01-24 | Update chat templates (#177) | Kawrakow
2025-01-23 | Deepseek V3 support added (#176) | saood06
2025-01-23 | Add Deepseek-R1-Distill pre-tokenizer | Iwan Kawrakow
2025-01-22 | Better BF16 support on AVX2 (#175) | Kawrakow
2025-01-21 | On Zen4 repack fp16 models to bf16_r16 when run-time-repacking is requested (... | Kawrakow