ik_llama.cpp.git, branch main
Commit log for path: llama.cpp (columns: Age, Commit message, Author)

2023-12-28  gpt2 : Add gpt2 architecture integration (#4555)  [manikbhandari]
2023-12-27  llama : add AWQ for llama, llama2, mpt, and mistral models (#4593)  [Nam D. Tran]
2023-12-26  cuda : fix vmm pool with multi GPU (#4620)  [slaren]
2023-12-24  llama : add PLaMo model (#3557)  [Shintarou Okada]
2023-12-24  cuda : improve cuda pool efficiency using virtual memory (#4606)  [slaren]
2023-12-23  fallback to CPU buffer if host buffer alloc fails (#4610)  [slaren]
2023-12-22  llama : fix platforms without mmap (#4578)  [slaren]
2023-12-22  llama : add ability to cancel model loading (#4462)  [crasm]
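The cancellable-loading entry above (#4462) changed the model-load progress callback to return a bool; returning false aborts the load and the load function returns NULL. A minimal sketch, assuming the llama.h of that period (llama_model_default_params, llama_load_model_from_file, and the progress_callback / progress_callback_user_data fields):

```c
#include <stdbool.h>
#include <stddef.h>
#include "llama.h"

// Returning false from the progress callback cancels the load (#4462).
static bool on_progress(float progress, void * user_data) {
    (void) progress;
    const bool * cancel_requested = (const bool *) user_data;
    return !*cancel_requested;  // true = keep loading, false = abort
}

int main(void) {
    bool cancel_requested = false;

    struct llama_model_params mparams = llama_model_default_params();
    mparams.progress_callback           = on_progress;
    mparams.progress_callback_user_data = &cancel_requested;

    // NULL here means the load failed or was cancelled by the callback.
    struct llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        return 1;
    }
    llama_free_model(model);
    return 0;
}
```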
2023-12-21  ggml : change ggml_scale to take a float instead of tensor (#4573)  [Georgi Gerganov]
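After the ggml_scale change above (#4573), the scale factor is passed as a plain float rather than a 1-element F32 tensor, so callers no longer allocate a scalar tensor just to hold it. A minimal sketch of the new call shape, assuming a ggml_context `ctx` and an F32 tensor `x` set up by the caller:

```c
#include <math.h>
#include "ggml.h"

// New signature from #4573: the scale is a float, not a tensor.
// Assumes `ctx` and `x` come from the surrounding graph-building code.
struct ggml_tensor * scale_by_inv_sqrt_d(struct ggml_context * ctx,
                                         struct ggml_tensor  * x,
                                         int                   d) {
    return ggml_scale(ctx, x, 1.0f / sqrtf((float) d));
}
```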
2023-12-21  llama : initial ggml-backend integration (#4520)  [slaren]
2023-12-21  llama : allow getting n_batch from llama_context in c api (#4540)  [Marcus Dunn]
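The n_batch entry above (#4540) exposes the context's configured batch size through the C API. A sketch, assuming the getter is named llama_n_batch as in later llama.h revisions:

```c
#include <stdint.h>
#include <stdio.h>
#include "llama.h"

// Sketch: read the batch size back from an initialized context (#4540).
// Assumes `ctx` is a valid struct llama_context *.
static void print_batch_size(const struct llama_context * ctx) {
    uint32_t n_batch = llama_n_batch(ctx);
    printf("n_batch = %u\n", n_batch);
}
```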
2023-12-21  llama : disable per-tensor info prints on model load (#4562)  [Johannes Gäßler]
2023-12-18  llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490)  [Ebey Abraham]
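ggml_mul_mat_set_prec from the phi-2 entry above (#4490) lets the graph request higher accumulation precision for a single matmul node; phi-2's attention scores can overflow when accumulated in F16. A sketch, assuming GGML_PREC_F32 from ggml.h and caller-provided tensors:

```c
#include "ggml.h"

// Sketch from #4490: request F32 precision for one mul_mat node, e.g. the
// K*Q attention scores. Assumes `ctx`, `k`, and `q` are set up by the caller.
struct ggml_tensor * attn_scores_f32(struct ggml_context * ctx,
                                     struct ggml_tensor  * k,
                                     struct ggml_tensor  * q) {
    struct ggml_tensor * kq = ggml_mul_mat(ctx, k, q);
    ggml_mul_mat_set_prec(kq, GGML_PREC_F32);
    return kq;
}
```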
2023-12-18  llama : fix try_override for bool_value which always return true (#4519)  [hankcs]
2023-12-17  decode : fix logits_valid for legacy API (#4516)  [Jared Van Bortel]
2023-12-17  llama.swiftui : add bench functionality (#4483)  [Georgi Gerganov]
2023-12-16  lora : add support for non-llama models (#3333)  [slaren]
2023-12-15  llama : sanity checks for access to logits (#4274)  [Jared Van Bortel]
2023-12-14  ggml : remove n_dims from ggml_tensor (#4469)  [slaren]
2023-12-14  ggml : add ggml_row_size() (fixes llama out of space) (#4461)  [LostRuins]
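ggml_row_size() above (#4461) computes the byte size of one row for a given type and element count, which stays correct for block-quantized types where a naive elements-times-type-size calculation misestimates the allocation. A sketch:

```c
#include <stdio.h>
#include "ggml.h"

// Sketch: per-row byte counts via ggml_row_size() (#4461). For quantized
// types the result accounts for the block layout.
int main(void) {
    printf("F32  row of 4096: %zu bytes\n", ggml_row_size(GGML_TYPE_F32,  4096));
    printf("Q4_0 row of 4096: %zu bytes\n", ggml_row_size(GGML_TYPE_Q4_0, 4096));
    return 0;
}
```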
2023-12-13  llama : add Mixtral support (#4406)  [slaren]
2023-12-12  english : use `typos` to fix comments and logs (#4354)  [Richard Kiss]
2023-12-09  grammar : revert the replacement of llama_token_to_piece with id_to_token (#4...  [Xiang (Kevin) Li]
2023-12-07  llama : per-layer KV cache + quantum K cache (#4309)  [Georgi Gerganov]
2023-12-05  grammar : pre-computed pieces + reserve mem + less string copies (#4330)  [Marcus Dunn]
2023-12-05  llama : allow overriding GGUF metadata when loading model (#4092)  [Kerfuffle]
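The metadata-override entry above (#4092) lets a loader replace GGUF key values without editing the model file; in the bundled example programs this surfaces as a repeatable flag of the form --override-kv KEY=TYPE:VALUE, e.g. --override-kv tokenizer.ggml.add_bos_token=bool:false. A hypothetical sketch against the C API; the struct fields (key, tag, bool_value), enum name, and the empty-key list terminator are all assumptions about that era's llama.h:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include "llama.h"

// Hypothetical sketch of a #4092-style override: force add_bos_token to false.
// Field and enum names here are assumptions, not confirmed against llama.h.
int main(void) {
    struct llama_model_kv_override overrides[2];
    memset(overrides, 0, sizeof(overrides));  // zeroed second entry ends the list

    snprintf(overrides[0].key, sizeof(overrides[0].key), "tokenizer.ggml.add_bos_token");
    overrides[0].tag        = LLAMA_KV_OVERRIDE_BOOL;
    overrides[0].bool_value = false;

    struct llama_model_params mparams = llama_model_default_params();
    mparams.kv_overrides = overrides;

    struct llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        return 1;
    }
    llama_free_model(model);
    return 0;
}
```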
2023-12-03  llama : pad KV cache size (#4280)  [Georgi Gerganov]
2023-12-01  llama : avoid using "optional" keyword (#4283)  [Georgi Gerganov]
2023-12-01  llama : support optional tensors (#4283)  [Georgi Gerganov]
2023-12-01  llama : support attention bias on LLaMA architecture (#4283)  [CausalLM]
2023-12-01  llama : add Qwen support (#4281)  [Shijie]
2023-12-01  llama : fix integer overflow during quantization (#4284)  [Georgi Gerganov]
2023-12-01  ggml : add ggml_soft_max_ext (#4256)  [Georgi Gerganov]
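ggml_soft_max_ext above (#4256) fuses scaling, mask addition, and softmax into a single op, replacing a ggml_scale + ggml_add + ggml_soft_max chain in the attention graph. A sketch, assuming the signature at the time was (ctx, a, mask, scale):

```c
#include <math.h>
#include "ggml.h"

// Sketch of the fused op from #4256: softmax(kq * scale + mask) in one node.
// Assumes `ctx`, attention scores `kq`, an additive mask `kq_mask`, and the
// per-head dimension `n_embd_head` are provided by the caller.
struct ggml_tensor * attn_probs(struct ggml_context * ctx,
                                struct ggml_tensor  * kq,
                                struct ggml_tensor  * kq_mask,
                                int                   n_embd_head) {
    const float scale = 1.0f / sqrtf((float) n_embd_head);
    return ggml_soft_max_ext(ctx, kq, kq_mask, scale);
}
```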
2023-12-01  build : fix build info generation and cleanup Makefile (#3920)  [Jared Van Bortel]
2023-11-30  llama : fix alignment of general.name in print meta (#4254)  [Daniel Bevenius]
2023-11-30  llama : fix typical sampling (#4261)  [tarcey]
2023-11-28  ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offloa...  [Georgi Gerganov]
2023-11-25  llama : grammar `reserve` space in `decode_utf8` (#4210)  [Marcus Dunn]
2023-11-24  llama : set metal log callback correctly (#4204)  [slaren]
2023-11-24  ggml-cuda : support stablelm rope (#4156)  [slaren]
2023-11-23  llama : KV cache view API + better KV cache management (#4170)  [Georgi Gerganov]
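The KV cache view API above (#4170) is a debugging aid that snapshots cell occupancy of the KV cache. A sketch; the function and field names (llama_kv_cache_view_init/update/free, used_cells, n_cells, token_count) are assumed from that era's llama.h:

```c
#include <stdio.h>
#include "llama.h"

// Sketch of the KV cache view API from #4170: snapshot the cache and report
// occupancy. Names are assumptions about the llama.h of that period.
static void dump_kv_usage(const struct llama_context * ctx) {
    struct llama_kv_cache_view view = llama_kv_cache_view_init(ctx, 1 /* max seqs per cell */);
    llama_kv_cache_view_update(ctx, &view);
    printf("kv cells used: %d / %d (tokens: %d)\n",
           view.used_cells, view.n_cells, view.token_count);
    llama_kv_cache_view_free(&view);
}
```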
2023-11-21  stablelm : simplify + speedup generation (#4153)  [Galunid]
2023-11-19  gguf-py : export chat templates (#4125)  [slaren]
2023-11-17  llama : increase max nodes (#4115)  [slaren]
2023-11-17  llama : add functions to get the model's metadata (#4013)  [slaren]
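The metadata getters above (#4013) expose a model's GGUF key/value pairs through the C API. A sketch, assuming llama_model_meta_val_str follows the usual buffer-based C pattern and returns the string length, or a negative value when the key is absent:

```c
#include <stdio.h>
#include "llama.h"

// Sketch using the metadata getters from #4013. Assumes `model` is a loaded
// struct llama_model *.
static void print_model_name(const struct llama_model * model) {
    char buf[256];
    if (llama_model_meta_val_str(model, "general.name", buf, sizeof(buf)) >= 0) {
        printf("general.name = %s\n", buf);
    } else {
        printf("general.name not set\n");
    }
}
```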
2023-11-17  llama : fix data units (#4101)  [Georgi Gerganov]
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  [Kerfuffle]
2023-11-15  llama : restore prefix space in llama tokenizer (#4081)  [Jared Van Bortel]
2023-11-14  stablelm : StableLM support (#3586)  [Galunid]
2023-11-13  sync : ggml (backend v2) (#3912)  [Georgi Gerganov]