Repository: ik_llama.cpp.git (branch: main)
Path: ggml/src/ggml-quants.c
Age         Commit message                                                  Author
2024-10-25  Bitnet changes (#106)                                           Kawrakow
2024-10-16  Adding IQ4_KSS: 4.0 bpw quants (#89)                            Kawrakow
2024-10-13  IQ2_KS: 2.1875 bpw non-linear quantization (#85)                Kawrakow
2024-10-09  New SOTA quantization: 4.25 bpw IQ4_KS (#83)                    Kawrakow
2024-10-02  Adding Q6_0 (#77)                                               Kawrakow
2024-10-02  iq4_nl: faster quantization (#76)                               Kawrakow
2024-10-01  Fix Q5_0 flash attention (#75)                                  Kawrakow
2024-09-27  Adding ability to have meta data per tensor row (#61)           Kawrakow
2024-09-09  Adding IQ1_TN - 1.6875 bpw for TriLM ternary models (#44)       Kawrakow
2024-08-19  AVX2 quantization for Q8_K (#22)                                Kawrakow
2024-08-12  Merge mainline - Aug 12 2024 (#17)                              Kawrakow
2024-08-09  iq6_k: WIP (quantize/dequantize)                                Iwan Kawrakow
2024-08-07  Adding IQ2_TN for use with ternary models (#13)                 Kawrakow
2024-08-05  q2_K: allow it to detect ternary nets and quantize accordingly  Iwan Kawrakow
2024-08-01  iq3_k: Basics                                                   Iwan Kawrakow
2024-08-01  iq5_k: Basics                                                   Iwan Kawrakow
2024-08-01  iq2_k: Basics                                                   Iwan Kawrakow
2024-07-28  IQ4_K: SOTA 4-bit quantization (#6)                             Kawrakow
2024-07-27  Merge mainline llama.cpp (#3)                                   Kawrakow