ik_llama.cpp.git (branch: main) — log
path: root/examples/server/server.cpp
Age        | Commit message | Author
2025-06-19 | add dry sampler (#513) | firecoperana
2025-06-17 | Send [DONE] for OAI compatibility (#470) | Kawrakow
2025-06-12 | Add top n sigma sampler and other webui fix (#512) | firecoperana
2025-06-08 | Fix non rpc build error (#506) | firecoperana
2025-06-08 | Revert "Rpc improvement (#480)" | Iwan Kawrakow
2025-06-08 | Rpc improvement (#480) | firecoperana
2025-06-08 | Webui improvement (#481) | firecoperana
2025-06-07 | Add an endpoint that lists all the saved prompt caches to server (#502) | saood06
2025-05-28 | set cache_prompt default to true (#465) | saood06
2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-20 | server : fix smart slot selection (#8020) | sasha0552
2024-06-18 | Only use FIM middle token if it exists (#7648) | Sigbjørn Skjæret
2024-06-12 | server : restore numeric prompts (#7883) | Georgi Gerganov
2024-06-10 | server : improve "prompt" handling (#7847) | Georgi Gerganov
2024-06-08 | server : smart slot selection using Longest Common Prefix (#7728) | sasha0552
2024-06-07 | server : do not get prompt in infill mode (#7286) | woodx
2024-06-06 | imatrix : migrate to gpt_params (#7771) | Georgi Gerganov
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-06-01 | server : new UI (#7633) | Yazan Agha-Schrader
2024-05-22 | common : normalize naming style (#7462) | Georgi Gerganov
2024-05-20 | server : return error on too large embedding input (#7389) | Georgi Gerganov
2024-05-19 | server: fix seed being reported back (#7382) | Johannes Gäßler
2024-05-17 | server : add support for the RPC backend (#7305) | Radoslav Gerganov
2024-05-14 | server: free sampling contexts on exit (#7264) | Steve Grubb
2024-05-11 | fix system prompt handling (#7153) | Xuan Son Nguyen
2024-05-11 | server : free llama_batch on exit (#7212) | Steve Grubb
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203) | Johannes Gäßler
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | Johannes Gäßler
2024-05-08 | server : add_special option for tokenize endpoint (#7059) | Johan
2024-05-07 | server: fix incorrectly reported token probabilities (#7125) | Johannes Gäßler
2024-05-04 | If first token generated from the server is the stop word the server will cra... | maor-ps
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-30 | Improve usability of --model-url & related flags (#6930) | Olivier Chafik
2024-04-27 | ci: server: tests python env on github container ubuntu latest / fix n_predic... | Pierrick Hymbert
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658) | Pierrick Hymbert
2024-04-26 | server: stop generation at `n_ctx_train` if `n_predict` is not set (#6638) | Pierrick Hymbert
2024-04-24 | common : revert showing control tokens by default for server (#6860) | Kyle Mistele
2024-04-24 | Server: fix seed for multiple slots (#6835) | Johannes Gäßler
2024-04-21 | llama : support Llama 3 HF conversion (#6745) | Pedro Cuenca
2024-04-12 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings,... | Olivier Chafik
2024-04-12 | server : coherent log output for KV cache full (#6637) | Pierrick Hymbert
2024-04-09 | BERT tokenizer fixes (#6498) | Jared Van Bortel
2024-04-08 | llama : save and restore kv cache for single seq id (#6341) | Jan Boon
2024-04-04 | server : remove obsolete --memory-f32 option | Georgi Gerganov
2024-04-04 | server : add option to disable KV offload (#6468) | Xiao-Yong Jin
2024-03-28 | server : stop gracefully on SIGTERM (#6348) | Eric Zhang
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122) | compilade
2024-03-26 | server : add `n_discard` parameter (#6300) | Jan Boon
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299) | slaren