ik_llama.cpp.git (branch: main)
Commit log for path: ggml/src/ggml.c

Date        Commit message (PR)                                                          Author
2025-04-29  CPU FA improvements (#351)                                                   Kawrakow
2025-04-26  Fix division by zero bug (#349)                                              Kawrakow
2025-04-26  Add support for Cohere2 (#341)                                               Kawrakow
2025-04-25  Fix q4_1 and q5_1 on Arm (#348)                                              Kawrakow
2025-04-17  Better TG performance for GQA models (CPU) (#332)                            Kawrakow
2025-03-27  Use bf16 instead of fp16 block scales for q8_1 (#292)                        Kawrakow
2025-03-23  Attempt to improve FlashMLA on the CPU (#277)                                Kawrakow
2025-03-21  Convert models to row-interleaved quants using the quantize tool (#272)     Kawrakow
2025-03-19  Fix ggml_compute_forward_dup_q (#269)                                        Kawrakow
2025-03-18  Allow q8_0 cache on the CPU for FlashMLA-2 (#265)                            Kawrakow
2025-03-13  FlashMLA-2 (CPU): faster and smaller compute buffer size (#253)              Kawrakow
2025-03-10  DeepSeek imatrix stuff (#250)                                                Kawrakow
2025-03-08  Faster FlashMLA prompt processing (#246)                                     Kawrakow
2025-03-07  Better FlashMLA (#243)                                                       Kawrakow
2025-03-03  Flash MLA (CPU only) (#240)                                                  Kawrakow
2025-03-02  SER - Smart Expert Reduction (#239)                                          Kawrakow
2025-03-01  A better way to measure the cost of ggml_barrier (#238)                      Kawrakow
2025-03-01  Reduce size of compute buffers (#237)                                        Kawrakow
2025-02-25  Give the user the option to override where model weights are stored (#232)  Kawrakow
2025-02-23  Fused MoE ffn_up and ffn_gate (#229)                                         Kawrakow
2025-02-22  Fuse MoE up and gate matrix multiplications (#219)                           Kawrakow
2025-02-21  Hopefully this really fixes the confusion between AVX512 and FANCY_SIMD (#216)  Kawrakow
2025-02-19  Q8_KV: 8-bit quantization type targeting the KV cache (#208)                 Kawrakow
2025-02-15  Bug fix in activation quantization                                           Iwan Kawrakow
2025-02-15  Moving 4D gemm logic from ggml.c to iqk_mul_mat.cpp (#207)                   Kawrakow
2025-02-11  DeepSeek FA support (CPU only) (#200)                                        Kawrakow
2025-02-09  Add optional MLA (#188)                                                      Kawrakow
2025-02-09  Use Q8_K_128 for IQ1_S_R4 and IQ1_M_R4 matrix multiplications (#194)         Kawrakow
2025-02-08  Revert #79 (#192)                                                            Kawrakow
2025-02-06  Rename q4_0_r4, q8_0_r4 and iq4_xs_r4 to _r8 (#189)                          Kawrakow
2025-02-06  IQ1_M_R4: better 1.75 bpw quants (#187)                                      Kawrakow
2025-02-05  IQ1_S_R4: better 1.5 bpw quants (#185)                                       Kawrakow
2025-01-20  More Flash Attention improvements (#173)                                     Kawrakow
2025-01-15  CPU Flash Attention improvements (#172)                                      Kawrakow
2025-01-12  Fix the strange FA behavior with odd/even batch sizes (#171)                 Kawrakow
2025-01-10  Be able to re-quantize MS BitNet I2_S models (#169)                          Kawrakow
2024-12-23  IQ3_S_R4 (#162)                                                              Kawrakow
2024-12-21  IQ2_S_R4 (#156)                                                              Kawrakow
2024-12-21  IQ2_XS_R4 (#155)                                                             Kawrakow
2024-12-20  IQ2_XXS_R4 (#154)                                                            Kawrakow
2024-12-20  fix typo (#151)                                                              Nexes the Elder
2024-12-20  IQ3_XXS_R4 (#153)                                                            Kawrakow
2024-12-18  IQ4_KS_R4 (#150)                                                             Kawrakow
2024-12-18  IQ5_K_R4 (#149)                                                              Kawrakow
2024-12-17  IQ2_K_R4 (#146)                                                              Kawrakow
2024-12-17  IQ3_K_R4 (#145)                                                              Kawrakow
2024-12-15  BF16_R16 - 16 interleaved bf16 rows (#142)                                   Kawrakow
2024-12-14  Q8_K_R8: Fastest quantized matrix multiplications (#141)                     Kawrakow
2024-12-12  IQ4_K_R4 (#138)                                                              Kawrakow
2024-12-11  Q2_K_R4 (#136)                                                               Kawrakow