Repository: ik_llama.cpp.git (branch: main)
Path: examples/infill/infill.cpp
Age        | Commit message | Author
2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-18 | Only use FIM middle token if it exists (#7648) | Sigbjørn Skjæret
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-05-22 | common : normalize naming style (#7462) | Georgi Gerganov
2024-04-21 | llama : support Llama 3 HF conversion (#6745) | Pedro Cuenca
2024-04-09 | BERT tokenizer fixes (#6498) | Jared Van Bortel
2024-03-02 | convert : automatically fall back to HfVocab if tokenizer.model doesn't exist... | Jared Van Bortel
2024-02-25 | llama : refactor k-shift implementation + KV defragmentation (#5691) | Georgi Gerganov
2024-02-16 | ggml : add numa options (#5377) | bmwl
2024-01-27 | Remove unused data and add fixes (#5154) | Michael Klimenko
2023-11-20 | main : Add ChatML functionality to main example (#4046) | Seb C
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | Kerfuffle
2023-11-02 | build : link against build info instead of compiling against it (#3879) | cebtenzzre
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720) | Marcus Dunn
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696) | Georgi Gerganov
2023-10-18 | speculative : add tree-based sampling example (#3624) | Georgi Gerganov
2023-10-11 | common : fix mirostat state when using multiple sequences (#3543) | Kerfuffle
2023-10-10 | infill. : fix tokenization (#3508) | vvhg1
2023-10-02 | infill : add new example + extend server API (#3296) | vvhg1