path: root/examples/server/server.cpp
Age        | Commit message | Author
2024-04-24 | common : revert showing control tokens by default for server (#6860) | Kyle Mistele
2024-04-24 | Server: fix seed for multiple slots (#6835) | Johannes Gäßler
2024-04-21 | llama : support Llama 3 HF conversion (#6745) | Pedro Cuenca
2024-04-12 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings,... | Olivier Chafik
2024-04-12 | server : coherent log output for KV cache full (#6637) | Pierrick Hymbert
2024-04-09 | BERT tokenizer fixes (#6498) | Jared Van Bortel
2024-04-08 | llama : save and restore kv cache for single seq id (#6341) | Jan Boon
2024-04-04 | server : remove obsolete --memory-f32 option | Georgi Gerganov
2024-04-04 | server : add option to disable KV offload (#6468) | Xiao-Yong Jin
2024-03-28 | server : stop gracefully on SIGTERM (#6348) | Eric Zhang
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122) | compilade
2024-03-26 | server : add `n_discard` parameter (#6300) | Jan Boon
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299) | slaren
2024-03-25 | Server: clean up OAI params parsing function (#6284) | Xuan Son Nguyen
2024-03-23 | common: llama_load_model_from_url split support (#6192) | Pierrick Hymbert
2024-03-22 | json-schema-to-grammar : fix order of props + non-str const/enum (#6232) | Olivier Chafik
2024-03-22 | server : fix n_keep always showing as 0 in response (#6211) | Jan Boon
2024-03-22 | server : enable continuous batching by default (#6231) | Georgi Gerganov
2024-03-21 | json-schema-to-grammar improvements (+ added to server) (#5978) | Olivier Chafik
2024-03-17 | common: llama_load_model_from_url using --model-url (#6098) | Pierrick Hymbert
2024-03-13 | llama : add pipeline parallelism support (#6017) | slaren
2024-03-13 | Server: Use multi-task for embeddings endpoint (#6001) | Xuan Son Nguyen
2024-03-11 | Server: format error to json (#5961) | Xuan Son Nguyen
2024-03-11 | server : maintain chat completion id for streaming responses (#5988) | Minsoo Cheong
2024-03-09 | server: benchmark: chat/completions scenario and other llm servers comparison... | Pierrick Hymbert
2024-03-09 | server : print chat template info | Georgi Gerganov
2024-03-09 | server : fix metrics init (#5964) | Georgi Gerganov
2024-03-09 | server : normalize embeddings (#5956) | SeungWon Jeong
2024-03-09 | server : fix passing prompt as tokens (#5955) | Alexey Parfenov
2024-03-09 | server : simplify logic for empty prompts (#5953) | Georgi Gerganov
2024-03-09 | Server: reorganize some http logic (#5939) | Xuan Son Nguyen
2024-03-09 | server : add SSL support (#5926) | Gabe Goodhart
2024-03-09 | server: tests: add truncated prompt tests, better kv cache size (#5933) | Pierrick Hymbert
2024-03-08 | llama : support Mamba Selective State Space Models (#5328) | compilade
2024-03-08 | server: metrics: add llamacpp:prompt_seconds_total and llamacpp:tokens_predic... | Pierrick Hymbert
2024-03-08 | server : fix EOS token detection with disabled cache (#5938) | Georgi Gerganov
2024-03-07 | server : add `/v1/completions` endpoint (#5914) | Minsoo Cheong
2024-03-07 | server : refactor (#5882) | Georgi Gerganov
2024-03-04 | llama : fix embeddings (#5796) | Georgi Gerganov
2024-03-04 | add alias for chat template (#5858) | Xuan Son Nguyen
2024-03-03 | server : init http requests thread pool with --parallel if set (#5836) | Pierrick Hymbert
2024-03-02 | server: tests: passkey challenge / self-extend with context shift demo (#5832) | Pierrick Hymbert
2024-03-01 | llama : cleanup unused mmq flags (#5772) | Pierrick Hymbert
2024-03-01 | server: allow to override threads server pool with --threads-http (#5794) | Pierrick Hymbert
2024-03-01 | server : fix newlines in help (#5785) | Georgi Gerganov
2024-02-29 | Server: normalize naming (#5779) | Xuan Son Nguyen
2024-02-28 | server : hit Ctrl+C twice to exit (#5734) | Xuan Son Nguyen
2024-02-28 | server : add "/chat/completions" alias for "/v1/..." (#5722) | Jorge A
2024-02-26 | fix server hangs on empty prompt (#5733) | Xuan Son Nguyen
2024-02-25 | llama : refactor k-shift implementation + KV defragmentation (#5691) | Georgi Gerganov