path: root/ggml/src
Age         Commit message                                                    Author
2025-04-03  Fix GCC compilation errors on ARM (#309)  Kawrakow
2025-04-03  Metal: much faster MoE prompt processing (#307)  Kawrakow
2025-04-01  Fix ARM_NEON build failure due to q8_2 (#303)  Kawrakow
2025-04-01  Quantization improvements (2) (#302)  Kawrakow
2025-04-01  Fix #300 (#301)  Kawrakow
2025-03-29  Quantization improvements (#295)  Kawrakow
2025-03-27  Use bf16 instead of fp16 block scales for q8_1 (#292)  Kawrakow
2025-03-25  CUDA: better MoE implementation (#283)  Kawrakow
2025-03-23  Improve DeepSeek batched processing speed (#282)  Kawrakow
2025-03-23  Attempt to improve FlashMLA on the CPU (#277)  Kawrakow
2025-03-22  Native build option for CUDA when GGML_NATIVE is set (#280)  Kawrakow
2025-03-22  Fighting with cmake (#279)  Kawrakow
2025-03-21  Convert models to row-interleaved quants using the quantize tool (#272)  Kawrakow
2025-03-19  Fix ggml_compute_forward_dup_q (#269)  Kawrakow
2025-03-19  Prevent FlashMLA-1 from running on CUDA (#268)  Kawrakow
2025-03-18  Allow q8_0 cache on the CPU for FlashMLA-2 (#265)  Kawrakow
2025-03-18  Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)  Kawrakow
2025-03-18  Fix #261 (#262)  Kawrakow
2025-03-18  Compile time option to use bf16 for quants without MMQ kernels (#261)  Kawrakow
2025-03-18  FlashMLA-2: reduce compute buffer size (CUDA and CPU) (#260)  Kawrakow
2025-03-13  FlashMLA-2 (CPU): faster and smaller compute buffer size (#253)  Kawrakow
2025-03-12  MLA-2: Allow usage of q8_0 for KV cache on CUDA (#252)  Kawrakow
2025-03-10  DeepSeek imatrix stuff (#250)  Kawrakow
2025-03-10  Faster MoE token generation on CUDA (#248)  Kawrakow
2025-03-08  Faster FlashMLA prompt processing (#246)  Kawrakow
2025-03-07  Better FlashMLA (#243)  Kawrakow
2025-03-05  DeepSeek CUDA Flash Attention (#241)  Kawrakow
2025-03-03  Flash MLA (CPU only) (#240)  Kawrakow
2025-03-02  SER - Smart Expert Reduction (#239)  Kawrakow
2025-03-01  A better way to measure the cost of ggml_barrier (#238)  Kawrakow
2025-03-01  Reduce size of compute buffers (#237)  Kawrakow
2025-02-27  Option to use MLA without a transposed cache (#235)  Kawrakow
2025-02-27  Faster MLA on CUDA (#234)  Kawrakow
2025-02-25  Give the user the option to override where model weights are stored (#232)  Kawrakow
2025-02-24  Fix #230 (#231)  Kawrakow
2025-02-23  Fused MoE ffn_up and ffn_gate (#229)  Kawrakow
2025-02-23  Fix compilation error with IQK_FA_ALL_QUANTS enabled (#226)  Kawrakow
2025-02-22  Fix #217 (#220)  Kawrakow
2025-02-22  Fuse MoE up and gate matrix multiplications (#219)  Kawrakow
2025-02-22  Better strategy for attention matrix multiplications when generating tokens ...  Kawrakow
2025-02-21  Hopefully this really fixes the confusion between AVX512 and FANCY_SIMD (#216)  Kawrakow
2025-02-20  Fix NEON gemm/gemv for legacy quants when row size is not divisible by 128 (#...  Kawrakow
2025-02-20  Optimized GEMM/GEMV for IQ1_S (#212)  Kawrakow
2025-02-19  Q8_KV: 8-bit quantization type targeting the KV cache (#208)  Kawrakow
2025-02-19  Repack also experts (#210)  Kawrakow
2025-02-15  Bug fix in activation quantization  Iwan Kawrakow
2025-02-15  Moving 4D gemm logic from ggml.c to iqk_mul_mat.cpp (#207)  Kawrakow
2025-02-12  Fix iqk_mul_mat on AVX512 systems that are missing BF16 support (#204)  Kawrakow
2025-02-11  DeepSeek FA support (CPU only) (#200)  Kawrakow
2025-02-09  Add optional MLA (#188)  Kawrakow