path: root/examples/server/server.cpp
Age        | Commit message | Author
2025-06-19 | add dry sampler (#513) | firecoperana
2025-06-17 | Send [DONE] for OAI compatibility (#470) | Kawrakow
2025-06-12 | Add top n sigma sampler and other webui fix (#512) | firecoperana
2025-06-08 | Fix non rpc build error (#506) | firecoperana
2025-06-08 | Revert "Rpc improvement (#480)" | Iwan Kawrakow
2025-06-08 | Rpc improvement (#480) | firecoperana
2025-06-08 | Webui improvement (#481) | firecoperana
2025-06-07 | Add an endpoint that lists all the saved prompt caches to server (#502) | saood06
2025-05-28 | set cache_prompt default to true (#465) | saood06
2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-20 | server : fix smart slot selection (#8020) | sasha0552
2024-06-18 | Only use FIM middle token if it exists (#7648) | Sigbjørn Skjæret
2024-06-12 | server : restore numeric prompts (#7883) | Georgi Gerganov
2024-06-10 | server : improve "prompt" handling (#7847) | Georgi Gerganov
2024-06-08 | server : smart slot selection using Longest Common Prefix (#7728) | sasha0552
2024-06-07 | server : do not get prompt in infill mode (#7286) | woodx
2024-06-06 | imatrix : migrate to gpt_params (#7771) | Georgi Gerganov
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-06-01 | server : new UI (#7633) | Yazan Agha-Schrader
2024-05-22 | common : normalize naming style (#7462) | Georgi Gerganov
2024-05-20 | server : return error on too large embedding input (#7389) | Georgi Gerganov
2024-05-19 | server: fix seed being reported back (#7382) | Johannes Gäßler
2024-05-17 | server : add support for the RPC backend (#7305) | Radoslav Gerganov
2024-05-14 | server: free sampling contexts on exit (#7264) | Steve Grubb
2024-05-11 | fix system prompt handling (#7153) | Xuan Son Nguyen
2024-05-11 | server : free llama_batch on exit (#7212) | Steve Grubb
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203) | Johannes Gäßler
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | Johannes Gäßler
2024-05-08 | server : add_special option for tokenize endpoint (#7059) | Johan
2024-05-07 | server: fix incorrectly reported token probabilities (#7125) | Johannes Gäßler
2024-05-04 | If first token generated from the server is the stop word the server will cra... | maor-ps
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-30 | Improve usability of --model-url & related flags (#6930) | Olivier Chafik
2024-04-27 | ci: server: tests python env on github container ubuntu latest / fix n_predic... | Pierrick Hymbert
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658) | Pierrick Hymbert
2024-04-26 | server: stop generation at `n_ctx_train` if `n_predict` is not set (#6638) | Pierrick Hymbert
2024-04-24 | common : revert showing control tokens by default for server (#6860) | Kyle Mistele
2024-04-24 | Server: fix seed for multiple slots (#6835) | Johannes Gäßler
2024-04-21 | llama : support Llama 3 HF conversion (#6745) | Pedro Cuenca
2024-04-12 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings,... | Olivier Chafik
2024-04-12 | server : coherent log output for KV cache full (#6637) | Pierrick Hymbert
2024-04-09 | BERT tokenizer fixes (#6498) | Jared Van Bortel
2024-04-08 | llama : save and restore kv cache for single seq id (#6341) | Jan Boon
2024-04-04 | server : remove obsolete --memory-f32 option | Georgi Gerganov
2024-04-04 | server : add option to disable KV offload (#6468) | Xiao-Yong Jin
2024-03-28 | server : stop gracefully on SIGTERM (#6348) | Eric Zhang
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122) | compilade
2024-03-26 | server : add `n_discard` parameter (#6300) | Jan Boon
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299) | slaren