path: root/ggml/src
Age         Commit message  Author
2025-04-30  Fix IQK_FA_ALL_QUANTS on AVX2 (#360)  Kawrakow
2025-04-29  CPU FA improvements (#351)  Kawrakow
2025-04-26  Fix division by zero bug (#349)  Kawrakow
2025-04-26  Add support for Cohere2 (#341)  Kawrakow
2025-04-25  Fix q4_1 and q5_1 on Arm (#348)  Kawrakow
2025-04-25  Add ability to manually set arch flags (#347)  Kawrakow
2025-04-25  Fix FA on ARM (#346)  Kawrakow
2025-04-24  cuda: use switch in constexpr funcs (#343)  Kawrakow
2025-04-21  Fix termux/android build (#336)  saood06
2025-04-17  Better TG performance for GQA models (CPU) (#332)  Kawrakow
2025-04-15  Better gemm/gemv on AVX2 for q4_0_r8 (#331)  Kawrakow
2025-04-15  Allow q8_0 KV cache for head size 256 (#330)  Kawrakow
2025-04-13  Improved IQ1_M quantization (#327)  Kawrakow
2025-04-07  Better iq2_xs quantization (#312)  Kawrakow
2025-04-07  Add copyright notices (#317)  Kawrakow
2025-04-05  We need to synchronize before using device to host async memcpy (#313)  Kawrakow
2025-04-04  Add -flax-vector-conversions for GCC on ARM (#311)  Kawrakow
2025-04-03  Metal: FA and FlashMLA (#310)  Kawrakow
2025-04-03  Fix GCC compilation errors on ARM (#309)  Kawrakow
2025-04-03  Metal: much faster MoE prompt processing (#307)  Kawrakow
2025-04-01  Fix ARM_NEON build failure due to q8_2 (#303)  Kawrakow
2025-04-01  Quantization improvements (2) (#302)  Kawrakow
2025-04-01  Fix #300 (#301)  Kawrakow
2025-03-29  Quantization improvements (#295)  Kawrakow
2025-03-27  Use bf16 instead of fp16 block scales for q8_1 (#292)  Kawrakow
2025-03-25  CUDA: better MoE implementation (#283)  Kawrakow
2025-03-23  Improve DeepSeek batched processing speed (#282)  Kawrakow
2025-03-23  Attempt to improve FlashMLA on the CPU (#277)  Kawrakow
2025-03-22  Native build option for CUDA when GGML_NATIVE is set (#280)  Kawrakow
2025-03-22  Fighting with cmake (#279)  Kawrakow
2025-03-21  Convert models to row-interleaved quants using the quantize tool (#272)  Kawrakow
2025-03-19  Fix ggml_compute_forward_dup_q (#269)  Kawrakow
2025-03-19  Prevent FlashMLA-1 from running on CUDA (#268)  Kawrakow
2025-03-18  Allow q8_0 cache on the CPU for FlashMLA-2 (#265)  Kawrakow
2025-03-18  Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)  Kawrakow
2025-03-18  Fix #261 (#262)  Kawrakow
2025-03-18  Compile time option to use bf16 for quants without MMQ kernels (#261)  Kawrakow
2025-03-18  FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)  Kawrakow
2025-03-13  FlashMLA-2 (CPU): faster and smaller compute buffer size (#253)  Kawrakow
2025-03-12  MLA-2: Allow usage of q8_0 for KV cache on CUDA (#252)  Kawrakow
2025-03-10  DeepSeek imatrix stuff (#250)  Kawrakow
2025-03-10  Faster MoE token generation on CUDA (#248)  Kawrakow
2025-03-08  Faster FlashMLA prompt processing (#246)  Kawrakow
2025-03-07  Better FlashMLA (#243)  Kawrakow
2025-03-05  DeepSeek CUDA Flash Attention (#241)  Kawrakow
2025-03-03  Flash MLA (CPU only) (#240)  Kawrakow
2025-03-02  SER - Smart Expert Reduction (#239)  Kawrakow
2025-03-01  A better way to measure the cost of ggml_barrier (#238)  Kawrakow
2025-03-01  Reduce size of compute buffers (#237)  Kawrakow
2025-02-27  Option to use MLA without a transposed cache (#235)  Kawrakow