ik_llama.cpp.git (branch: main)
Commit log for path ggml/src (format: age, commit message, [author]):

2025-04-30  Fix IQK_FA_ALL_QUANTS on AVX2 (#360)  [Kawrakow]
2025-04-29  CPU FA improvements (#351)  [Kawrakow]
2025-04-26  Fix division by zero bug (#349)  [Kawrakow]
2025-04-26  Add support for Cohere2 (#341)  [Kawrakow]
2025-04-25  Fix q4_1 and q5_1 on Arm (#348)  [Kawrakow]
2025-04-25  Add ability to manually set arch flags (#347)  [Kawrakow]
2025-04-25  Fix FA on ARM (#346)  [Kawrakow]
2025-04-24  cuda: use switch in constexpr funcs (#343)  [Kawrakow]
2025-04-21  Fix termux/android build (#336)  [saood06]
2025-04-17  Better TG performance for GQA models (CPU) (#332)  [Kawrakow]
2025-04-15  Better gemm/gemv on AVX2 for q4_0_r8 (#331)  [Kawrakow]
2025-04-15  Allow q8_0 KV cache for head size 256 (#330)  [Kawrakow]
2025-04-13  Improved IQ1_M quantization (#327)  [Kawrakow]
2025-04-07  Better iq2_xs quantization (#312)  [Kawrakow]
2025-04-07  Add copyright notices (#317)  [Kawrakow]
2025-04-05  We need to synchronize before using device to host async memcpy (#313)  [Kawrakow]
2025-04-04  Add -flax-vector-conversions for GCC on ARM (#311)  [Kawrakow]
2025-04-03  Metal: FA and FlashMLA (#310)  [Kawrakow]
2025-04-03  Fix GCC compilation errors on ARM (#309)  [Kawrakow]
2025-04-03  Metal: much faster MoE prompt processing (#307)  [Kawrakow]
2025-04-01  Fix ARM_NEON build failure due to q8_2 (#303)  [Kawrakow]
2025-04-01  Quantization improvements (2) (#302)  [Kawrakow]
2025-04-01  Fix #300 (#301)  [Kawrakow]
2025-03-29  Quantization improvements (#295)  [Kawrakow]
2025-03-27  Use bf16 instead of fp16 block scales for q8_1 (#292)  [Kawrakow]
2025-03-25  CUDA: better MoE implementation (#283)  [Kawrakow]
2025-03-23  Improve DeepSeek batched processing speed (#282)  [Kawrakow]
2025-03-23  Attempt to improve FlashMLA on the CPU (#277)  [Kawrakow]
2025-03-22  Native build option for CUDA when GGML_NATIVE is set (#280)  [Kawrakow]
2025-03-22  Fighting with cmake (#279)  [Kawrakow]
2025-03-21  Convert models to row-interleaved quants using the quantize tool (#272)  [Kawrakow]
2025-03-19  Fix ggml_compute_forward_dup_q (#269)  [Kawrakow]
2025-03-19  Prevent FlashMLA-1 from running on CUDA (#268)  [Kawrakow]
2025-03-18  Allow q8_0 cache on the CPU for FlashMLA-2 (#265)  [Kawrakow]
2025-03-18  Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)  [Kawrakow]
2025-03-18  Fix #261 (#262)  [Kawrakow]
2025-03-18  Compile time option to use bf16 for quants without MMQ kernels (#261)  [Kawrakow]
2025-03-18  FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)  [Kawrakow]
2025-03-13  FlashMLA-2 (CPU): faster and smaller compute buffer size (#253)  [Kawrakow]
2025-03-12  MLA-2: Allow usage of q8_0 for KV cache on CUDA (#252)  [Kawrakow]
2025-03-10  DeepSeek imatrix stuff (#250)  [Kawrakow]
2025-03-10  Faster MoE token generation on CUDA (#248)  [Kawrakow]
2025-03-08  Faster FlashMLA prompt processing (#246)  [Kawrakow]
2025-03-07  Better FlashMLA (#243)  [Kawrakow]
2025-03-05  DeepSeek CUDA Flash Attention (#241)  [Kawrakow]
2025-03-03  Flash MLA (CPU only) (#240)  [Kawrakow]
2025-03-02  SER - Smart Expert Reduction (#239)  [Kawrakow]
2025-03-01  A better way to measure the cost of ggml_barrier (#238)  [Kawrakow]
2025-03-01  Reduce size of compute buffers (#237)  [Kawrakow]
2025-02-27  Option to use MLA without a transposed cache (#235)  [Kawrakow]