path: root/examples/server
Age         Commit message  Author
2024-04-15  `main`: add --json-schema / -j flag (#6659)  Olivier Chafik
2024-04-15  server : revert "minor layout improvements" (#6684)  Pierrick Hymbert
2024-04-12  JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings,...  Olivier Chafik
2024-04-12  server : coherent log output for KV cache full (#6637)  Pierrick Hymbert
2024-04-10  minor layout improvements (#6572)  Ralph Soika
2024-04-09  BERT tokenizer fixes (#6498)  Jared Van Bortel
2024-04-09  server : detect search query to start webchat (#6554)  Ed Lee
2024-04-08  llama : save and restore kv cache for single seq id (#6341)  Jan Boon
2024-04-06  ci: bench: support sse and fix prompt processing time / server: add tokens us...  Pierrick Hymbert
2024-04-04  server: allow penalizing repetition of newlines on server webpage (#6431)  Shakhar Dasgupta
2024-04-04  ci: bench: add more ftype, fix triggers and bot comment (#6466)  Pierrick Hymbert
2024-04-04  server : remove obsolete --memory-f32 option  Georgi Gerganov
2024-04-04  server : add option to disable KV offload (#6468)  Xiao-Yong Jin
2024-04-03  A few small fixes to server's README docs (#6428)  Fattire
2024-04-03  server : handle exception on wrong type in request (#6452)  JH23X
2024-03-28  server : stop gracefully on SIGTERM (#6348)  Eric Zhang
2024-03-27  server: continuous performance monitoring and PR comment (#6283)  Pierrick Hymbert
2024-03-27  server: public: use relative routes for static files (#6325)  Eric Zhang
2024-03-26  llama : greatly reduce output buffer memory usage (#6122)  compilade
2024-03-26  server : add `n_discard` parameter (#6300)  Jan Boon
2024-03-26  cuda : rename build flag to LLAMA_CUDA (#6299)  slaren
2024-03-25  Server: clean up OAI params parsing function (#6284)  Xuan Son Nguyen
2024-03-23  common: llama_load_model_from_url split support (#6192)  Pierrick Hymbert
2024-03-23  server: docs: `--threads` and `--threads-batch`, `--ubatch-size`, `--log-disable` (...  Pierrick Hymbert
2024-03-23  server: flush stdout after logging in both text and json layout (#6253)  Pierrick Hymbert
2024-03-22  json-schema-to-grammar : fix order of props + non-str const/enum (#6232)  Olivier Chafik
2024-03-22  server : fix n_keep always showing as 0 in response (#6211)  Jan Boon
2024-03-22  server : enable continuous batching by default (#6231)  Georgi Gerganov
2024-03-21  server : update readme doc from `slot_id` to `id_slot` (#6213)  Jan Boon
2024-03-21  json-schema-to-grammar improvements (+ added to server) (#5978)  Olivier Chafik
2024-03-20  Server: version bump for httplib and json (#6169)  Xuan Son Nguyen
2024-03-20  server : allow to override -ngl in tests (#6170)  Georgi Gerganov
2024-03-20  Server: Handle n_keep parameter in the request (#6174)  Karthick
2024-03-20  server tests : more pythonic process management; fix bare `except:` (#6146)  Jared Van Bortel
2024-03-17  common: llama_load_model_from_url using --model-url (#6098)  Pierrick Hymbert
2024-03-14  server: disable debug release type sanitizer, simplify trigger (#6047)  Pierrick Hymbert
2024-03-13  llama : add pipeline parallelism support (#6017)  slaren
2024-03-13  Server: Use multi-task for embeddings endpoint (#6001)  Xuan Son Nguyen
2024-03-11  Update server docker image URLs (#5997)  Jakub N
2024-03-11  Server: format error to json (#5961)  Xuan Son Nguyen
2024-03-11  server : maintain chat completion id for streaming responses (#5988)  Minsoo Cheong
2024-03-10  server: ci: windows build and tests (#5968)  Pierrick Hymbert
2024-03-09  server: benchmark: chat/completions scenario and other llm servers comparison...  Pierrick Hymbert
2024-03-09  server : print chat template info  Georgi Gerganov
2024-03-09  server : fix metrics init (#5964)  Georgi Gerganov
2024-03-09  server : clarify some items in the readme (#5957)  Georgi Gerganov
2024-03-09  server : normalize embeddings (#5956)  SeungWon Jeong
2024-03-09  server : fix passing prompt as tokens (#5955)  Alexey Parfenov
2024-03-09  server : simplify logic for empty prompts (#5953)  Georgi Gerganov
2024-03-09  Server: reorganize some http logic (#5939)  Xuan Son Nguyen