ik_llama.cpp.git (branch: main)
path: root/ggml/include
Age         Commit message  (Author)
2025-02-09  Use Q8_K_128 for IQ1_S_R4 and IQ1_M_R4 matrix multiplications (#194)  (Kawrakow)
2025-02-08  Revert #79 (#192)  (Kawrakow)
2025-02-06  Rename q4_0_r4, q8_0_r4 and iq4_xs_r4 to _r8 (#189)  (Kawrakow)
2025-02-06  IQ1_M_R4: better 1.75 bpw quants (#187)  (Kawrakow)
2025-02-05  IQ1_S_R4: better 1.5 bpw quants (#185)  (Kawrakow)
2025-01-15  CPU Flash Attention improvements (#172)  (Kawrakow)
2025-01-10  Be able to re-quantize MS BitNet I2_S models (#169)  (Kawrakow)
2024-12-23  IQ3_S_R4 (#162)  (Kawrakow)
2024-12-21  IQ2_S_R4 (#156)  (Kawrakow)
2024-12-21  IQ2_XS_R4 (#155)  (Kawrakow)
2024-12-20  IQ2_XXS_R4 (#154)  (Kawrakow)
2024-12-20  IQ3_XXS_R4 (#153)  (Kawrakow)
2024-12-18  IQ4_KS_R4 (#150)  (Kawrakow)
2024-12-18  IQ5_K_R4 (#149)  (Kawrakow)
2024-12-17  IQ2_K_R4 (#146)  (Kawrakow)
2024-12-17  IQ3_K_R4 (#145)  (Kawrakow)
2024-12-15  BF16_R16 - 16 interleaved bf16 rows (#142)  (Kawrakow)
2024-12-14  Q8_K_R8: Fastest quantized matrix multiplications (#141)  (Kawrakow)
2024-12-12  IQ4_K_R4 (#138)  (Kawrakow)
2024-12-11  Q2_K_R4 (#136)  (Kawrakow)
2024-12-11  Q3_K_R4 (#134)  (Kawrakow)
2024-12-10  Q5_K_R4 (#132)  (Kawrakow)
2024-12-10  Q6_K_R4 (#130)  (Kawrakow)
2024-12-09  Q4_K_R4 (#129)  (Kawrakow)
2024-12-08  Faster IQ4_XS_R4 on Zen4 (#128)  (Kawrakow)
2024-12-08  Rename iq4_nl_x4 to iq4_nl_r4 (#126)  (Kawrakow)
2024-12-06  iq2_bn_r4: fastest Bitnet CPU implementation on the planet (#124)  (Kawrakow)
2024-12-04  IQ4_XS_R4 (#123)  (Kawrakow)
2024-12-03  Q5_0_R4 (#121)  (Kawrakow)
2024-12-03  Q8_0_R4 (#120)  (Kawrakow)
2024-12-02  Q4_0_R4 (#119)  (Kawrakow)
2024-12-02  IQ4_NL_X4 (#118)  (Kawrakow)
2024-10-31  Faster MoE inference (#112)  (Kawrakow)
2024-10-25  Remove forgotten IQ1_TN, IQ2_TN enum values  (Iwan Kawrakow)
2024-10-25  Bitnet changes (#106)  (Kawrakow)
2024-10-20  Avoid rebuild of GGML graph for each token (#98)  (agray3)
2024-10-16  Adding IQ4_KSS: 4.0 bpw quants (#89)  (Kawrakow)
2024-10-13  IQ2_KS: 2.1875 bpw non-linear quantization (#85)  (Kawrakow)
2024-10-09  New SOTA quantization: 4.25 bpw IQ4_KS (#83)  (Kawrakow)
2024-10-04  Do not quantize activations if not necessary (#79)  (Kawrakow)
2024-10-02  Fused unary(x)*y (#70)  (Kawrakow)
2024-10-02  Adding Q6_0 (#77)  (Kawrakow)
2024-09-28  Adding SWIGLU unary op (#65)  (Kawrakow)
2024-09-27  Adding ability to have meta data per tensor row (#61)  (Kawrakow)
2024-09-09  Adding IQ1_TN - 1.6875 bpw for TriLM ternary models (#44)  (Kawrakow)
2024-09-08  Adding fused rms_norm (#42)  (Kawrakow)
2024-08-27  Faster Gemma2 (#27)  (Kawrakow)
2024-08-20  Fused soft cap and SIMD-ified GeLU (#9)  (Kawrakow)
2024-08-14  Skip barriers of noops (#19)  (Kawrakow)
2024-08-12  Merge mainline - Aug 12 2024 (#17)  (Kawrakow)