ik_llama.cpp.git (branch: main)
path: root/llama.cpp

Age  Commit message  (Author)
2024-01-24  llama : pre-allocate input tensors in a separate buffer (#5100)  (slaren)
2024-01-23  minor : clean-up some warnings and style (#5094)  (Georgi Gerganov)
2024-01-22  llama : fix not enough space in buffer with Qwen (#5086)  (slaren)
2024-01-22  llama : support StableLM 2 1.6B (#5052)  (compilade)
2024-01-22  llama : add Q3_K_XS (#5060)  (Kawrakow)
2024-01-22  llama : add more qwen2 models (#5071)  (Shijie)
2024-01-20  llama : run all KQV ops on the CPU with no KV offload (#5049)  (slaren)
2024-01-19  llama : support upcoming Qwen2 (#5037)  (Shijie)
2024-01-19  llama : add CodeShell support (#5016)  (chiranko)
2024-01-19  llama : fix falcon arch for tied output embeddings (#4978)  (John)
2024-01-18  llama : fix mlock with no-mmap with Metal (#5025)  (slaren)
2024-01-17  ggml : add IQ2 to test-backend-ops + refactoring (#4990)  (Georgi Gerganov)
2024-01-17  backend : add eval callback (#4935)  (Georgi Gerganov)
2024-01-17  llama : use Q4_K for attn_v for Q2_K_S when n_gqa >= 4 (#4996)  (Kawrakow)
2024-01-16  ggml : importance matrix support for legacy quants (#4969)  (Kawrakow)
2024-01-15  llama : apply classifier-free guidance to logits directly (#4951)  (David Friehs)
2024-01-15  llama : check for 256 divisibility for IQ2_XS, IQ2_XXS (#4950)  (Kawrakow)
2024-01-14  llama : fix missing quotes (#4937)  (David Pflug)
2024-01-14  llama : check LLAMA_TRACE env for extra logging (#4929)  (Georgi Gerganov)
2024-01-14  llama : use LLAMA_LOG_ macros for logging  (Georgi Gerganov)
2024-01-14  Fix ffn_down quantization mix for MoE models (#4927)  (Kawrakow)
2024-01-14  llama : support WinXP build with MinGW 8.1.0 (#3419)  (Karthik Kumar Viswanathan)
2024-01-14  2-bit quantizations (#4897)  (Kawrakow)
2024-01-14  Make Q3_K_S be the same as old Q3_K_L for Mixtral-8x7B (#4906)  (Kawrakow)
2024-01-13  metal : remove old API (#4919)  (Georgi Gerganov)
2024-01-13  llama : fix detokenization of non-special added-tokens (#4916)  (Georgi Gerganov)
2024-01-13  llama : minimize size used for state save/load (#4820)  (David Friehs)
2024-01-13  convert : update phi-2 to latest HF repo (#4903)  (Georgi Gerganov)
2024-01-12  llama : ggml-backend integration (#4766)  (slaren)
2024-01-12  llama : remove redundant assert for StableLM (#4901)  (Georgi Gerganov)
2024-01-12  llama : fix typo "imp_embd" -> "inp_embd"  (Georgi Gerganov)
2024-01-12  llama : fix llm_build_k_shift to use correct n_rot (#4889)  (Georgi Gerganov)
2024-01-11  llama : restore intended k-quants mixes for MoE models (#4872)  (Kawrakow)
2024-01-11  ggml : SOTA 2-bit quants (add IQ2_XS) (#4856)  (Kawrakow)
2024-01-11  main : print total token count and tokens consumed so far (#4874)  (pudepiedj)
2024-01-10  llama : add additional suffixes for model params (#4834)  (Brian)
2024-01-10  llama : recognize 1B phi models (#4847)  (Austin)
2024-01-08  SOTA 2-bit quants (#4773)  (Kawrakow)
2024-01-08  examples : add passkey test (#3856)  (Georgi Gerganov)
2024-01-07  llama : remove unused vars (#4796)  (Georgi Gerganov)
2024-01-07  llama : remove redundant GQA check (#4796)  (Georgi Gerganov)
2024-01-07  llama : print tensor meta for debugging  (Georgi Gerganov)
2024-01-02  llama : llama_model_desc print number of experts  (Georgi Gerganov)
2024-01-02  llama : replace all API facing `int`'s with `int32_t` (#4577)  (Marcus Dunn)
2024-01-02  llama : differentiate the KV dims in the attention (#4657)  (postmasters)
2023-12-30  ggml : add ggml_cpu_has_avx_vnni() (#4589)  (automaticcat)
2023-12-28  gpt2 : Add gpt2 architecture integration (#4555)  (manikbhandari)
2023-12-27  llama : add AWQ for llama, llama2, mpt, and mistral models (#4593)  (Nam D. Tran)
2023-12-26  cuda : fix vmm pool with multi GPU (#4620)  (slaren)
2023-12-24  llama : add PLaMo model (#3557)  (Shintarou Okada)