path: root/llama.cpp
Age | Commit message | Author
2024-01-08 | SOTA 2-bit quants (#4773) | Kawrakow
2024-01-08 | examples : add passkey test (#3856) | Georgi Gerganov
2024-01-07 | llama : remove unused vars (#4796) | Georgi Gerganov
2024-01-07 | llama : remove redundant GQA check (#4796) | Georgi Gerganov
2024-01-07 | llama : print tensor meta for debugging | Georgi Gerganov
2024-01-02 | llama : llama_model_desc print number of experts | Georgi Gerganov
2024-01-02 | llama : replace all API facing `int`'s with `int32_t` (#4577) | Marcus Dunn
2024-01-02 | llama : differentiate the KV dims in the attention (#4657) | postmasters
2023-12-30 | ggml : add ggml_cpu_has_avx_vnni() (#4589) | automaticcat
2023-12-28 | gpt2 : Add gpt2 architecture integration (#4555) | manikbhandari
2023-12-27 | llama : add AWQ for llama, llama2, mpt, and mistral models (#4593) | Nam D. Tran
2023-12-26 | cuda : fix vmm pool with multi GPU (#4620) | slaren
2023-12-24 | llama : add PLaMo model (#3557) | Shintarou Okada
2023-12-24 | cuda : improve cuda pool efficiency using virtual memory (#4606) | slaren
2023-12-23 | fallback to CPU buffer if host buffer alloc fails (#4610) | slaren
2023-12-22 | llama : fix platforms without mmap (#4578) | slaren
2023-12-22 | llama : add ability to cancel model loading (#4462) | crasm
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov
2023-12-21 | llama : initial ggml-backend integration (#4520) | slaren
2023-12-21 | llama : allow getting n_batch from llama_context in c api (#4540) | Marcus Dunn
2023-12-21 | llama : disable per-tensor info prints on model load (#4562) | Johannes Gäßler
2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | Ebey Abraham
2023-12-18 | llama : fix try_override for bool_value which always return true (#4519) | hankcs
2023-12-17 | decode : fix logits_valid for legacy API (#4516) | Jared Van Bortel
2023-12-17 | llama.swiftui : add bench functionality (#4483) | Georgi Gerganov
2023-12-16 | lora : add support for non-llama models (#3333) | slaren
2023-12-15 | llama : sanity checks for access to logits (#4274) | Jared Van Bortel
2023-12-14 | ggml : remove n_dims from ggml_tensor (#4469) | slaren
2023-12-14 | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | LostRuins
2023-12-13 | llama : add Mixtral support (#4406) | slaren
2023-12-12 | english : use `typos` to fix comments and logs (#4354) | Richard Kiss
2023-12-09 | grammar : revert the replacement of llama_token_to_piece with id_to_token (#4... | Xiang (Kevin) Li
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309) | Georgi Gerganov
2023-12-05 | grammar : pre-computed pieces + reserve mem + less string copies (#4330) | Marcus Dunn
2023-12-05 | llama : allow overriding GGUF metadata when loading model (#4092) | Kerfuffle
2023-12-03 | llama : pad KV cache size (#4280) | Georgi Gerganov
2023-12-01 | llama : avoid using "optional" keyword (#4283) | Georgi Gerganov
2023-12-01 | llama : support optional tensors (#4283) | Georgi Gerganov
2023-12-01 | llama : support attention bias on LLaMA architecture (#4283) | CausalLM
2023-12-01 | llama : add Qwen support (#4281) | Shijie
2023-12-01 | llama : fix integer overflow during quantization (#4284) | Georgi Gerganov
2023-12-01 | ggml : add ggml_soft_max_ext (#4256) | Georgi Gerganov
2023-12-01 | build : fix build info generation and cleanup Makefile (#3920) | Jared Van Bortel
2023-11-30 | llama : fix alignment of general.name in print meta (#4254) | Daniel Bevenius
2023-11-30 | llama : fix typical sampling (#4261) | tarcey
2023-11-28 | ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload... | Georgi Gerganov
2023-11-25 | llama : grammar `reserve` space in `decode_utf8` (#4210) | Marcus Dunn
2023-11-24 | llama : set metal log callback correctly (#4204) | slaren
2023-11-24 | ggml-cuda : support stablelm rope (#4156) | slaren
2023-11-23 | llama : KV cache view API + better KV cache management (#4170) | Georgi Gerganov