ik_llama.cpp.git (branch: main)

Recent commits (age, commit message, author):
2025-05-03  Trying to fix iq1_s_r4/iq1_m_r4 quantization failure (#368) - Kawrakow
2025-05-02  Fix FA bug on AVX2 (#364) - Kawrakow
2025-05-02  Fix model architecture name (#366) - saood06
2025-04-30  Update README.md (#352) - Kawrakow
2025-04-30  Fix IQK_FA_ALL_QUANTS on AVX2 (#360) - Kawrakow
2025-04-29  Add missing enum values for qwen3 and qwen3moe (#356) - Kawrakow
2025-04-29  Apply Qwen3 PR from llama.cpp (#355) - Ben Harris
2025-04-29  Update AUTHORS - Kawrakow
2025-04-29  CPU FA improvements (#351) - Kawrakow
2025-04-26  Add GLM-4-0414 Model Support (#344) - ubergarm
2025-04-26  Fix division by zero bug (#349) - Kawrakow
2025-04-26  Add support for Cohere2 (#341) - Kawrakow
2025-04-25  Fix q4_1 and q5_1 on Arm (#348) - Kawrakow
2025-04-25  Add ability to manually set arch flags (#347) - Kawrakow
2025-04-25  Fix FA on ARM (#346) - Kawrakow
2025-04-25  Fix LLaMA-4 attention (#342) - Kawrakow
2025-04-24  cuda: use switch in constexpr funcs (#343) - Kawrakow
2025-04-24  Update gguf-py constants (#298) - saood06
2025-04-22  BitNet adjustments (#338) - Kawrakow
2025-04-22  Add support for bitnet2b_2501 model (#337) - saood06
2025-04-21  Fix termux/android build (#336) - saood06
2025-04-17  Better TG performance for GQA models (CPU) (#332) - Kawrakow
2025-04-15  Better gemm/gemv on AVX2 fr q4_0_r8 (#331) - Kawrakow
2025-04-15  Allow q8_0 KV cache for head size 256 (#330) - Kawrakow
2025-04-14  imatrix: collect layer influence statistics (#328) - Kawrakow
2025-04-14  Add ability to hide imatrix details in llama-quantize (#329) - Kawrakow
2025-04-13  Improved IQ1_M quantization (#327) - Kawrakow
2025-04-12  Fix KLD precision (#325) - Kawrakow
2025-04-11  Correct L4 rms_norm (#324) - Kawrakow
2025-04-10  LlaMA-4 support (text only) (#321) - Kawrakow
2025-04-08  Guard against attempts to use MLA for non-MLA models (#320) - Kawrakow
2025-04-07  Update AUTHORS - Kawrakow
2025-04-07  Update AUTHORS - Kawrakow
2025-04-07  Use links for ggml/llama.cpp authors (#318) - Kawrakow
2025-04-07  Better iq2_xs quantization (#312) - Kawrakow
2025-04-07  Add copyright notices (#317) - Kawrakow
2025-04-07  Update LICENSE - Kawrakow
2025-04-05  We need to synchronize before using device to host async memcpy (#313) - Kawrakow
2025-04-04  Add -flax-vector-conversions for GCC on ARM (#311) - Kawrakow
2025-04-03  Metal: FA and FlashMLA (#310) - Kawrakow
2025-04-03  Fix GCC compilation errors on ARM (#309) - Kawrakow
2025-04-03  Metal: much faster MoE prompt processing (#307) - Kawrakow
2025-04-01  docs: update README.md (#304) - Ikko Eltociear Ashimine
2025-04-01  Fix ARM_NEON build failure due to q8_2 (#303) - Kawrakow
2025-04-01  Quantization improvements (2) (#302) - Kawrakow
2025-04-01  Additional guards for interleaved quants (#299) - Kawrakow
2025-04-01  Fix #300 (#301) - Kawrakow
2025-03-29  Quantization improvements (#295) - Kawrakow
2025-03-27  Make sure tensor row size is multiple of block size also when quantizing with... - Kawrakow
2025-03-27  Use bf16 instead of fp16 block scales for q8_1 (#292) - Kawrakow