index : ik_llama.cpp.git (branch: main)
path: root/ggml/src
Age         Commit message  Author
2025-04-03  Fix GCC compilation errors on ARM (#309)  Kawrakow
2025-04-03  Metal: much faster MoE prompt processing (#307)  Kawrakow
2025-04-01  Fix ARM_NEON build failure due to q8_2 (#303)  Kawrakow
2025-04-01  Quantization improvements (2) (#302)  Kawrakow
2025-04-01  Fix #300 (#301)  Kawrakow
2025-03-29  Quantization improvements (#295)  Kawrakow
2025-03-27  Use bf16 instead of fp16 block scales for q8_1 (#292)  Kawrakow
2025-03-25  CUDA: better MoE implementation (#283)  Kawrakow
2025-03-23  Improve DeepSeek batched processing speed (#282)  Kawrakow
2025-03-23  Attempt to improve FlashMLA on the CPU (#277)  Kawrakow
2025-03-22  Native build option for CUDA when GGML_NATIVE is set (#280)  Kawrakow
2025-03-22  Fighting with cmake (#279)  Kawrakow
2025-03-21  Convert models to row-interleaved quants using the quantize tool (#272)  Kawrakow
2025-03-19  Fix ggml_compute_forward_dup_q (#269)  Kawrakow
2025-03-19  Prevent FlashMLA-1 from running on CUDA (#268)  Kawrakow
2025-03-18  Allow q8_0 cache on the CPU for FlashMLA-2 (#265)  Kawrakow
2025-03-18  Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)  Kawrakow
2025-03-18  Fix #261 (#262)  Kawrakow
2025-03-18  Compile time option to use bf16 for quants without MMQ kernels (#261)  Kawrakow
2025-03-18  FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)  Kawrakow
2025-03-13  FlashMLA-2 (CPU): faster and smaller compute buffer size (#253)  Kawrakow
2025-03-12  MLA-2: Allow usage of q8_0 for KV cache on CUDA (#252)  Kawrakow
2025-03-10  DeepSeek imatrix stuff (#250)  Kawrakow
2025-03-10  Faster MoE token generation on CUDA (#248)  Kawrakow
2025-03-08  Faster FlashMLA prompt processing (#246)  Kawrakow
2025-03-07  Better FlashMLA (#243)  Kawrakow
2025-03-05  DeepSeek CUDA Flash Attention (#241)  Kawrakow
2025-03-03  Flash MLA (CPU only) (#240)  Kawrakow
2025-03-02  SER - Smart Expert Reduction (#239)  Kawrakow
2025-03-01  A better way to measure the cost of ggml_barrier (#238)  Kawrakow
2025-03-01  Reduce size of compute buffers (#237)  Kawrakow
2025-02-27  Option to use MLA without a transposed cache (#235)  Kawrakow
2025-02-27  Faster MLA on CUDA (#234)  Kawrakow
2025-02-25  Give the user the option to override where model weights are stored (#232)  Kawrakow
2025-02-24  Fix #230 (#231)  Kawrakow
2025-02-23  Fused MoE ffn_up and ffn_gate (#229)  Kawrakow
2025-02-23  Fix compilation error with IQK_FA_ALL_QUANTS enabled (#226)  Kawrakow
2025-02-22  Fix #217 (#220)  Kawrakow
2025-02-22  Fuse MoE up and gate matrix multiplications (#219)  Kawrakow
2025-02-22  Better strategy for attention matrix multiplications when generating tokens ...  Kawrakow
2025-02-21  Hopefully this really fixes the confusion between AVX512 and FANCY_SIMD (#216)  Kawrakow
2025-02-20  Fix NEON gemm/gemv for legacy quants when row size is not divisible by 128 (#...  Kawrakow
2025-02-20  Optimized GEMM/GEMV for IQ1_S (#212)  Kawrakow
2025-02-19  Q8_KV: 8-bit quantization type targeting the KV cache (#208)  Kawrakow
2025-02-19  Repack also experts (#210)  Kawrakow
2025-02-15  Bug fix in activation quantization  Iwan Kawrakow
2025-02-15  Moving 4D gemm logic from ggml.c to iqk_mul_mat.cpp (#207)  Kawrakow
2025-02-12  Fix iqk_mul_mat on AVX512 systems that are missing BF16 support (#204)  Kawrakow
2025-02-11  DeepSeek FA support (CPU only) (#200)  Kawrakow
2025-02-09  Add optional MLA (#188)  Kawrakow