ik_llama.cpp.git (branch: main)
path: root/examples/parallel/parallel.cpp
Age         Commit message                                                                    Author
2023-11-23  llama : KV cache view API + better KV cache management (#4170)                    Georgi Gerganov
2023-11-23  examples : fix typo in parallel example doc comment (#4181)                       Daniel Bevenius
2023-11-02  build : link against build info instead of compiling against it (#3879)           cebtenzzre
2023-10-23  llama : remove token functions with `context` args in favor of `model` (#3720)    Marcus Dunn
2023-10-20  sampling : refactor init to use llama_sampling_params (#3696)                     Georgi Gerganov
2023-10-18  speculative : add tree-based sampling example (#3624)                             Georgi Gerganov
2023-10-11  common : fix mirostat state when using multiple sequences (#3543)                 Kerfuffle
2023-10-09  refact : fix convert script + zero out KV cache to avoid nans (#3523)             Georgi Gerganov
2023-10-06  parallel : add option to load external prompt file (#3416)                        pudepiedj
2023-10-03  llama : fix session saving/loading (#3400)                                        Georgi Gerganov
2023-09-28  llama.cpp : split llama_context_params into model and context params (#3301)      slaren
2023-09-28  llama : custom attention mask + parallel decoding + no context swaps (#3228)      Georgi Gerganov