| Age | Commit message | Author |
|---|---|---|
| 2024-10-18 | CLI - Specify GGML_TYPE to quantize for the main tensors. (#91) | Nexes the Elder |
| 2024-10-16 | Adding IQ4_KSS: 4.0 bpw quants (#89) | Kawrakow |
| 2024-10-13 | IQ2_KS: 2.1875 bpw non-linear quantization (#85) | Kawrakow |
| 2024-10-09 | New SOTA quantization: 4.25 bpw IQ4_KS (#83) | Kawrakow |
| 2024-10-02 | Adding Q6_0 (#77) | Kawrakow |
| 2024-09-09 | Adding IQ1_TN - 1.6875 bpw for TriLM ternary models (#44) | Kawrakow |
| 2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow |
| 2024-08-09 | iq6_k: WIP (quantize/dequantize) | Iwan Kawrakow |
| 2024-08-07 | Adding IQ2_TN for use with ternary models (#13) | Kawrakow |
| 2024-08-05 | q2_K: allow it to detect ternary nets and quantize accordingly | Iwan Kawrakow |
| 2024-08-01 | iq3_k: Basics | Iwan Kawrakow |
| 2024-08-01 | iq5_k: Basics | Iwan Kawrakow |
| 2024-08-01 | iq2_k: Basics | Iwan Kawrakow |
| 2024-07-28 | IQ4_K: SOTA 4-bit quantization (#6) | Kawrakow |
| 2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow |
