Age | Commit message | Author
2025-03-07 | Custom quantization rules with regular expressions (#244) | Kawrakow
2025-03-05 | DeepSeek CUDA Flash Attention (#241) | Kawrakow
2025-03-03 | Flash MLA (CPU only) (#240) | Kawrakow
2025-03-02 | SER - Smart Expert Reduction (#239) | Kawrakow
2025-03-01 | A better way to measure the cost of ggml_barrier (#238) | Kawrakow
2025-03-01 | Reduce size of compute buffers (#237) | Kawrakow
2025-02-27 | Option to use MLA without a transposed cache (#235) | Kawrakow
2025-02-27 | Faster MLA on CUDA (#234) | Kawrakow
2025-02-25 | Give the user the option to override where model weights are stored (#232) | Kawrakow
2025-02-24 | Fix #230 (#231) | Kawrakow
2025-02-23 | Fused MoE ffn_up and ffn_gate (#229) | Kawrakow
2025-02-23 | Add new sweep-bench benchmark (#225) | saood06
2025-02-23 | Fix compilation error with IQK_FA_ALL_QUANTS enabled (#226) | Kawrakow
2025-02-22 | Fix #217 (#220) | Kawrakow
2025-02-22 | Fuse MoE up and gate matrix multiplications (#219) | Kawrakow
2025-02-22 | Better strategy for attention matrix multiplications when generating tokens ... | Kawrakow
2025-02-21 | Hopefully this really fixes the confusion between AVX512 and FANCY_SIMD (#216) | Kawrakow
2025-02-20 | Honor attn_output specified in the command line also for low-bit quants | Iwan Kawrakow
2025-02-20 | Fix NEON gemm/gemv for legacy quants when row size is not divisible by 128 (#... | Kawrakow
2025-02-20 | Optimized GEMM/GEMV for IQ1_S (#212) | Kawrakow
2025-02-19 | Q8_KV: 8-bit quantization type targeting the KV cache (#208) | Kawrakow
2025-02-19 | Repack also experts (#210) | Kawrakow
2025-02-15 | Bug fix in activation quantization | Iwan Kawrakow
2025-02-15 | Moving 4D gemm logic from ggml.c to iqk_mul_mat.cpp (#207) | Kawrakow
2025-02-13 | MLA: allow Q8_0 K-cache for MLA (#206) | Kawrakow
2025-02-13 | Faster MLA prompt processing (#205) | Kawrakow
2025-02-12 | Fix iqk_mul_mat on AVX512 systems that are missing BF16 support (#204) | Kawrakow
2025-02-12 | Fix imatrix overprotectiveness (#202) | Kawrakow
2025-02-11 | DeepSeek FA support (CPU only) (#200) | Kawrakow
2025-02-10 | Load all MoE experts during warmup and make warmup 1 token (#198) | saood06
2025-02-09 | Add optional MLA (#188) | Kawrakow
2025-02-09 | FA: Add option to build all FA kernels (#197) | Kawrakow
2025-02-09 | Use Q8_K_128 for IQ1_S_R4 and IQ1_M_R4 matrix multiplications (#194) | Kawrakow
2025-02-08 | Revert #79 (#192) | Kawrakow
2025-02-07 | cuda: non-contiguous rms norm (#190) | Kawrakow
2025-02-07 | Add additional checks for iq1_s_r4 quantization (#191) | Kawrakow
2025-02-06 | Rename q4_0_r4, q8_0_r4 and iq4_xs_r4 to _r8 (#189) | Kawrakow
2025-02-06 | IQ1_M_R4: better 1.75 bpw quants (#187) | Kawrakow
2025-02-05 | iq1_s_r4: slightly faster NEON gemm/gemv (#186) | Kawrakow
2025-02-05 | IQ1_S_R4: better 1.5 bpw quants (#185) | Kawrakow
2025-01-30 | Deepseek-Lite (#184) | Kawrakow
2025-01-30 | Faster Q4_K_R4 and Q5_K_R4 on AVX2/Zen4 (#182) | Kawrakow
2025-01-29 | Various (#181) | Kawrakow
2025-01-27 | Minor performance improvements (#179) | Kawrakow
2025-01-27 | Interleave 8 rows (Q8_0, IQ4_XS) (#178) | Kawrakow
2025-01-24 | Update chat templates (#177) | Kawrakow
2025-01-23 | Deepseek V3 support added (#176) | saood06
2025-01-23 | Add Deepseek-R1-Distill pre-tokenizer | Iwan Kawrakow
2025-01-22 | Better BF16 support on AVX2 (#175) | Kawrakow
2025-01-21 | On Zen4 repack fp16 models to bf16_r16 when run-time-repacking is requested (... | Kawrakow