path: root/examples/server/server.cpp
Age        | Commit message | Author
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-20 | server : fix smart slot selection (#8020) | sasha0552
2024-06-18 | Only use FIM middle token if it exists (#7648) | Sigbjørn Skjæret
2024-06-12 | server : restore numeric prompts (#7883) | Georgi Gerganov
2024-06-10 | server : improve "prompt" handling (#7847) | Georgi Gerganov
2024-06-08 | server : smart slot selection using Longest Common Prefix (#7728) | sasha0552
2024-06-07 | server : do not get prompt in infill mode (#7286) | woodx
2024-06-06 | imatrix : migrate to gpt_params (#7771) | Georgi Gerganov
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-06-01 | server : new UI (#7633) | Yazan Agha-Schrader
2024-05-22 | common : normalize naming style (#7462) | Georgi Gerganov
2024-05-20 | server : return error on too large embedding input (#7389) | Georgi Gerganov
2024-05-19 | server: fix seed being reported back (#7382) | Johannes Gäßler
2024-05-17 | server : add support for the RPC backend (#7305) | Radoslav Gerganov
2024-05-14 | server: free sampling contexts on exit (#7264) | Steve Grubb
2024-05-11 | fix system prompt handling (#7153) | Xuan Son Nguyen
2024-05-11 | server : free llama_batch on exit (#7212) | Steve Grubb
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203) | Johannes Gäßler
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | Johannes Gäßler
2024-05-08 | server : add_special option for tokenize endpoint (#7059) | Johan
2024-05-07 | server: fix incorrectly reported token probabilities (#7125) | Johannes Gäßler
2024-05-04 | If first token generated from the server is the stop word the server will cra... | maor-ps
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-30 | Improve usability of --model-url & related flags (#6930) | Olivier Chafik
2024-04-27 | ci: server: tests python env on github container ubuntu latest / fix n_predic... | Pierrick Hymbert
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658) | Pierrick Hymbert
2024-04-26 | server: stop generation at `n_ctx_train` if `n_predict` is not set (#6638) | Pierrick Hymbert
2024-04-24 | common : revert showing control tokens by default for server (#6860) | Kyle Mistele
2024-04-24 | Server: fix seed for multiple slots (#6835) | Johannes Gäßler
2024-04-21 | llama : support Llama 3 HF conversion (#6745) | Pedro Cuenca
2024-04-12 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings,... | Olivier Chafik
2024-04-12 | server : coherent log output for KV cache full (#6637) | Pierrick Hymbert
2024-04-09 | BERT tokenizer fixes (#6498) | Jared Van Bortel
2024-04-08 | llama : save and restore kv cache for single seq id (#6341) | Jan Boon
2024-04-04 | server : remove obsolete --memory-f32 option | Georgi Gerganov
2024-04-04 | server : add option to disable KV offload (#6468) | Xiao-Yong Jin
2024-03-28 | server : stop gracefully on SIGTERM (#6348) | Eric Zhang
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122) | compilade
2024-03-26 | server : add `n_discard` parameter (#6300) | Jan Boon
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299) | slaren
2024-03-25 | Server: clean up OAI params parsing function (#6284) | Xuan Son Nguyen
2024-03-23 | common: llama_load_model_from_url split support (#6192) | Pierrick Hymbert
2024-03-22 | json-schema-to-grammar : fix order of props + non-str const/enum (#6232) | Olivier Chafik
2024-03-22 | server : fix n_keep always showing as 0 in response (#6211) | Jan Boon
2024-03-22 | server : enable continuous batching by default (#6231) | Georgi Gerganov
2024-03-21 | json-schema-to-grammar improvements (+ added to server) (#5978) | Olivier Chafik
2024-03-17 | common: llama_load_model_from_url using --model-url (#6098) | Pierrick Hymbert
2024-03-13 | llama : add pipeline parallelism support (#6017) | slaren
2024-03-13 | Server: Use multi-task for embeddings endpoint (#6001) | Xuan Son Nguyen
2024-03-11 | Server: format error to json (#5961) | Xuan Son Nguyen