ik_llama.cpp.git
main
path: root/common/common.h
Age         Commit message                                                            Author
2023-12-07  llama : per-layer KV cache + quantum K cache (#4309)  Georgi Gerganov
2023-12-05  llama : allow overriding GGUF metadata when loading model (#4092)  Kerfuffle
2023-12-05  sampling : custom samplers order (#4285)  MaggotHATE
2023-11-23  llama : KV cache view API + better KV cache management (#4170)  Georgi Gerganov
2023-11-20  main : Add ChatML functionality to main example (#4046)  Seb C
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  Kerfuffle
2023-11-03  speculative : change default p_accept to 0.5 + CLI args (#3919)  Georgi Gerganov
2023-11-03  common : YAYF (yet another YARN fix) (#3925)  Georgi Gerganov
2023-11-02  build : link against build info instead of compiling against it (#3879)  cebtenzzre
2023-11-01  llama : implement YaRN RoPE scaling (#2268)  cebtenzzre
2023-11-01  common : allow caller to handle help/argument exceptions (#3715)  bandoti
2023-10-20  sampling : refactor init to use llama_sampling_params (#3696)  Georgi Gerganov
2023-10-18  speculative : add tree-based sampling example (#3624)  Georgi Gerganov
2023-10-17  tokenizer : special token handling (#3538)  staviq
2023-10-12  examples : support LLaVA v1.5 (multimodal model) (#3436)  M. Yusuf Sarıgöz
2023-10-11  common : fix mirostat state when using multiple sequences (#3543)  Kerfuffle
2023-10-06  parallel : add option to load external prompt file (#3416)  pudepiedj
2023-10-02  infill : add new example + extend server API (#3296)  vvhg1
2023-09-28  llama.cpp : split llama_context_params into model and context params (#3301)  slaren
2023-09-28  train : finetune LORA (#2632)  xaedes
2023-09-28  llama : custom attention mask + parallel decoding + no context swaps (#3228)  Georgi Gerganov
2023-09-23  examples : fix RoPE defaults to match PR #3240 (#3315)  Cebtenzzre
2023-09-18  make : restore build-info.h dependency for several targets (#3205)  Cebtenzzre
2023-09-15  examples : add compiler version and target to build info (#2998)  Cebtenzzre
2023-09-15  common : do not use GNU zero-length __VA_ARGS__ extension (#3195)  Cebtenzzre
2023-09-15  llama : remove mtest (#3177)  Roland
2023-09-13  speculative : add --n-gpu-layers-draft option (#3063)  FK
2023-09-07  fix some warnings from gcc and clang-tidy (#3038)  Cebtenzzre
2023-09-04  build : on Mac OS enable Metal by default (#2901)  Georgi Gerganov
2023-09-03  speculative : PoC for speeding-up inference via speculative sampling (#2926)  Georgi Gerganov
2023-08-30  main : log file (#2748)  staviq
2023-08-28  YAML result logging + preset script (#2657)  Johannes Gäßler
2023-08-27  llama : more tokenizer fixes (#2810)  Georgi Gerganov
2023-08-25  llama : add llama_beam_search() (#2267)  Matt Pulver
2023-08-23  llm : add Falcon support (#2717)  Georgi Gerganov
2023-08-23  Strided perplexity (#2714)  Kawrakow
2023-08-22  CUDA : use mul_mat_q kernels by default (#2683)  Johannes Gäßler
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398)  Georgi Gerganov