path: root/llama.cpp
Age        | Commit message | Author
2024-01-19 | llama : fix falcon arch for tied output embeddings (#4978) | John
2024-01-18 | llama : fix mlock with no-mmap with Metal (#5025) | slaren
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | Georgi Gerganov
2024-01-17 | backend : add eval callback (#4935) | Georgi Gerganov
2024-01-17 | llama : use Q4_K for attn_v for Q2_K_S when n_gqa >= 4 (#4996) | Kawrakow
2024-01-16 | ggml : importance matrix support for legacy quants (#4969) | Kawrakow
2024-01-15 | llama : apply classifier-free guidance to logits directly (#4951) | David Friehs
2024-01-15 | llama : check for 256 divisibility for IQ2_XS, IQ2_XXS (#4950) | Kawrakow
2024-01-14 | llama : fix missing quotes (#4937) | David Pflug
2024-01-14 | llama : check LLAMA_TRACE env for extra logging (#4929) | Georgi Gerganov
2024-01-14 | llama : use LLAMA_LOG_ macros for logging | Georgi Gerganov
2024-01-14 | Fix ffn_down quantization mix for MoE models (#4927) | Kawrakow
2024-01-14 | llama : support WinXP build with MinGW 8.1.0 (#3419) | Karthik Kumar Viswanathan
2024-01-14 | 2-bit quantizations (#4897) | Kawrakow
2024-01-14 | Make Q3_K_S be the same as old Q3_K_L for Mixtral-8x7B (#4906) | Kawrakow
2024-01-13 | metal : remove old API (#4919) | Georgi Gerganov
2024-01-13 | llama : fix detokenization of non-special added-tokens (#4916) | Georgi Gerganov
2024-01-13 | llama : minimize size used for state save/load (#4820) | David Friehs
2024-01-13 | convert : update phi-2 to latest HF repo (#4903) | Georgi Gerganov
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-12 | llama : remove redundant assert for StableLM (#4901) | Georgi Gerganov
2024-01-12 | llama : fix typo "imp_embd" -> "inp_embd" | Georgi Gerganov
2024-01-12 | llama : fix llm_build_k_shift to use correct n_rot (#4889) | Georgi Gerganov
2024-01-11 | llama : restore intended k-quants mixes for MoE models (#4872) | Kawrakow
2024-01-11 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | Kawrakow
2024-01-11 | main : print total token count and tokens consumed so far (#4874) | pudepiedj
2024-01-10 | llama : add additional suffixes for model params (#4834) | Brian
2024-01-10 | llama : recognize 1B phi models (#4847) | Austin
2024-01-08 | SOTA 2-bit quants (#4773) | Kawrakow
2024-01-08 | examples : add passkey test (#3856) | Georgi Gerganov
2024-01-07 | llama : remove unused vars (#4796) | Georgi Gerganov
2024-01-07 | llama : remove redundant GQA check (#4796) | Georgi Gerganov
2024-01-07 | llama : print tensor meta for debugging | Georgi Gerganov
2024-01-02 | llama : llama_model_desc print number of experts | Georgi Gerganov
2024-01-02 | llama : replace all API facing `int`'s with `int32_t` (#4577) | Marcus Dunn
2024-01-02 | llama : differentiate the KV dims in the attention (#4657) | postmasters
2023-12-30 | ggml : add ggml_cpu_has_avx_vnni() (#4589) | automaticcat
2023-12-28 | gpt2 : Add gpt2 architecture integration (#4555) | manikbhandari
2023-12-27 | llama : add AWQ for llama, llama2, mpt, and mistral models (#4593) | Nam D. Tran
2023-12-26 | cuda : fix vmm pool with multi GPU (#4620) | slaren
2023-12-24 | llama : add PLaMo model (#3557) | Shintarou Okada
2023-12-24 | cuda : improve cuda pool efficiency using virtual memory (#4606) | slaren
2023-12-23 | fallback to CPU buffer if host buffer alloc fails (#4610) | slaren
2023-12-22 | llama : fix platforms without mmap (#4578) | slaren
2023-12-22 | llama : add ability to cancel model loading (#4462) | crasm
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov
2023-12-21 | llama : initial ggml-backend integration (#4520) | slaren
2023-12-21 | llama : allow getting n_batch from llama_context in c api (#4540) | Marcus Dunn
2023-12-21 | llama : disable per-tensor info prints on model load (#4562) | Johannes Gäßler
2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | Ebey Abraham
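
Two entries in this log change the public C API. The #4573 entry replaces the tensor argument of ggml_scale with a plain float. A minimal before/after sketch; scale_attn is a hypothetical helper, and ctx0, cur, and n_embd_head stand in for values produced during graph construction:

```c
#include <math.h>
#include "ggml.h"

// Hypothetical helper illustrating the #4573 signature change; ctx0 and cur
// are assumed to come from an existing ggml graph build.
struct ggml_tensor * scale_attn(struct ggml_context * ctx0,
                                struct ggml_tensor  * cur,
                                int                   n_embd_head) {
    // before #4573 the factor had to be wrapped in a 1-element tensor:
    //   ggml_scale(ctx0, cur, ggml_new_f32(ctx0, 1.0f/sqrtf((float) n_embd_head)));
    // after #4573 it is passed directly as a float:
    return ggml_scale(ctx0, cur, 1.0f/sqrtf((float) n_embd_head));
}
```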
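
Likewise, #4540 exposes the context's batch size through the C API, and #4577 later moved API-facing ints to int32_t. A usage sketch, hedged on the exact integer width since it changed between these commits; print_n_batch is a hypothetical wrapper:

```c
#include <stdio.h>
#include "llama.h"

// Query the batch size via the accessor added in #4540; ctx is an
// already-initialized llama_context. The return type was an API-facing int
// that #4577 changed to a fixed-width type, so cast before printing.
void print_n_batch(const struct llama_context * ctx) {
    printf("n_batch = %d\n", (int) llama_n_batch(ctx));
}
```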