path: root/ggml/src/ggml-quants.c
Age         Commit message                                                  Author
2024-10-25  Bitnet changes (#106)                                           Kawrakow
2024-10-16  Adding IQ4_KSS: 4.0 bpw quants (#89)                            Kawrakow
2024-10-13  IQ2_KS: 2.1875 bpw non-linear quantization (#85)                Kawrakow
2024-10-09  New SOTA quantization: 4.25 bpw IQ4_KS (#83)                    Kawrakow
2024-10-02  Adding Q6_0 (#77)                                               Kawrakow
2024-10-02  iq4_nl: faster quantization (#76)                               Kawrakow
2024-10-01  Fix Q5_0 flash attention (#75)                                  Kawrakow
2024-09-27  Adding ability to have meta data per tensor row (#61)           Kawrakow
2024-09-09  Adding IQ1_TN - 1.6875 bpw for TriLM ternary models (#44)       Kawrakow
2024-08-19  AVX2 quantization for Q8_K (#22)                                Kawrakow
2024-08-12  Merge mainline - Aug 12 2024 (#17)                              Kawrakow
2024-08-09  iq6_k: WIP (quantize/dequantize)                                Iwan Kawrakow
2024-08-07  Adding IQ2_TN for use with ternary models (#13)                 Kawrakow
2024-08-05  q2_K: allow it to detect ternary nets and quantize accordingly  Iwan Kawrakow
2024-08-01  iq3_k: Basics                                                   Iwan Kawrakow
2024-08-01  iq5_k: Basics                                                   Iwan Kawrakow
2024-08-01  iq2_k: Basics                                                   Iwan Kawrakow
2024-07-28  IQ4_K: SOTA 4-bit quantization (#6)                             Kawrakow
2024-07-27  Merge mainline llama.cpp (#3)                                   Kawrakow