ik_llama.cpp.git (branch: main)
Unnamed repository; edit this file 'description' to name the repository.
path: root/examples/main/main.cpp

Age         Commit message  [Author]
2023-09-07  fix some warnings from gcc and clang-tidy (#3038)  [Cebtenzzre]
2023-09-04  build : on Mac OS enable Metal by default (#2901)  [Georgi Gerganov]
2023-09-03  speculative : PoC for speeding-up inference via speculative sampling (#2926)  [Georgi Gerganov]
2023-09-03  perplexity : fix ETA by warming up the model with an empty run  [Georgi Gerganov]
2023-08-30  main : log file (#2748)  [staviq]
2023-08-28  YAML result logging + preset script (#2657)  [Johannes Gäßler]
2023-08-27  llama : more tokenizer fixes (#2810)  [Georgi Gerganov]
2023-08-26  main : fix bug (penalize_nl=false doesn't work) + suppress warning on mingw (...  [Dr. Tom Murphy VII Ph.D]
2023-08-26  Fix spm whitespaces (#2806)  [klosax]
2023-08-24  Fix for main example getting stuck when -n -2 and --interactive (#2767)  [Kerfuffle]
2023-08-23  llm : add Falcon support (#2717)  [Georgi Gerganov]
2023-08-23  main : insert bos if no tokens (#2727)  [klosax]
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398)  [Georgi Gerganov]
2023-08-10  Add --n-predict -2 for stopping generation on full context (#2565)  [Christian Demsar]
2023-08-04  Add --simple-io option for subprocesses and break out console.h and cpp (#1558)  [DannyDaemonic]
2023-07-25  main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)  [Xiao-Yong Jin]
2023-07-23  llama : add grammar-based sampling (#1773)  [Evan Jones]
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276)  [Georgi Gerganov]
2023-07-22  llama : optimize memory buffers (#2325)  [Georgi Gerganov]
2023-07-21  llama : remove cfg smooth factor as it is only a reparameterization of the gu...  [Guillaume "Vermeille" Sanchez]
2023-07-15  llama : add custom RoPE (#2054)  [Xiao-Yong Jin]
2023-07-11  llama : add classifier-free guidance (#2135)  [Bach Le]
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  [Evan Miller]
2023-07-06  convert : update for baichuan (#2081)  [Judd]
2023-06-29  Use unsigned for random seed (#2006)  [Howard Su]
2023-06-26  ggml : add NUMA support (#1556)  [zrm]
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  [Didzis Gosko]
2023-06-17  minor : warning fixes  [Georgi Gerganov]
2023-06-16  Fixed possible macro redefinition (#1892)  [FrankHB]
2023-06-16  build : fix and ignore MSVC warnings (#1889)  [Borislav Stanimirov]
2023-06-13  llama : do a warm-up eval at start for better timings (#1824)  [Georgi Gerganov]
2023-06-11  Fix issue where interactive mode crashes when input exceeds ctx size (#1789)  [Kerfuffle]
2023-06-06  main: add the possibility to open the prompt cache read-only (#1640)  [Willy Tarreau]
2023-06-04  llama : Metal inference (#1642)  [Georgi Gerganov]
2023-06-03  Fix prompt cache saving and chat-persistent rollover (#1678)  [Evan Jones]
2023-05-29  Work around for recalculating logits in cached prompts (Fixes #1585) (#1609)  [DannyDaemonic]
2023-05-25  Some improvements to loading the session with --prompt-cache (#1550)  [Kerfuffle]
2023-05-20  llama : add llama_init_backend() API (close #1527)  [Georgi Gerganov]
2023-05-19  main : make reverse prompt option act as a stop token in non-interactive mode...  [Jason McCartney]
2023-05-18  Fixes #1511 lambda issue for w64devkit (mingw) (#1513)  [DannyDaemonic]
2023-05-16  define default model path once, sync path with readme (#1366)  [András Salamon]
2023-05-12  llama : fix --mtest option (close #1414)  [Georgi Gerganov]
2023-05-10  main : add option to save full output to session (#1338)  [Evan Jones]
2023-05-08  Interface improvements and `--multiline-input` (previously `--author-mode`) (...  [DannyDaemonic]
2023-05-08  llama : require first token to be BOS (#1303)  [Georgi Gerganov]
2023-05-06  Remove default arguments from sampling functions (#1343)  [Jed Fox]
2023-05-04  main : add --in-suffix option (#1318)  [44670]
2023-05-04  fix #1224 reverse prompt and multi line (#1297)  [Tomas]
2023-05-02  Handle signals properly on Windows (#1123)  [DannyDaemonic]
2023-05-02  examples : add llama_init_from_gpt_params() common function (#1290)  [Ron Evans]
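A listing in this shape can be regenerated from a local clone with standard `git log` format flags (the clone directory name `ik_llama.cpp` here is an assumption; adjust to your checkout):

```shell
# Per-file history as "DATE  subject  [author]", matching the listing above.
# %ad = author date (short form), %s = subject line, %an = author name.
git -C ik_llama.cpp log --date=short \
    --pretty=format:'%ad  %s  [%an]' -- examples/main/main.cpp
```

The `-- <path>` separator restricts the log to commits that touched that file, which is how per-file pages like this one are produced.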