path: root/examples/server/server.cpp
Age        | Commit message                                                           | Author
-----------|--------------------------------------------------------------------------|----------------------
2024-01-18 | server : defer tasks when "slot unavailable" (#5018)                     | Xuan Son Nguyen
2024-01-13 | server : fix prompt caching with system prompt (#4914)                   | Georgi Gerganov
2024-01-13 | server : fix deadlock that occurs in multi-prompt scenarios (#4905)      | Ziad Ben Hadj-Alouane
2024-01-13 | server : fix crash with multimodal models without BOS token (#4904)      | makomk
2024-01-12 | llama : ggml-backend integration (#4766)                                 | slaren
2024-01-11 | server : fix infill when prompt is empty (#4833)                         | Georgi Gerganov
2024-01-11 | server : implement credentialed CORS (#4514)                             | Laura
2024-01-11 | server : support for multiple api keys (#4864)                           | Michael Coppola
2024-01-11 | server : add `LOG_INFO` when model is successfully loaded (#4881)        | Behnam M
2024-01-11 | server : fix typo in model name (#4876)                                  | Isaac McFadyen
2024-01-11 | server : fix build + rename enums (#4870)                                | Georgi Gerganov
2024-01-10 | server : add a `/health` endpoint (#4860)                                | Behnam M
2024-01-07 | server : fix n_predict check (#4798)                                     | Georgi Gerganov
2024-01-04 | server : send token probs for "stream == false" (#4714)                  | Georgi Gerganov
2024-01-02 | editorconfig : fix whitespace and indentation (#4710)                    | Georgi Gerganov
2024-01-02 | server : add --override-kv parameter (#4710)                             | minarchist
2023-12-30 | clip : refactor + bug fixes (#4696)                                      | Georgi Gerganov
2023-12-29 | server : replace sleep with condition variables (#4673)                  | Justine Tunney
2023-12-29 | server : fix OpenAI server sampling w.r.t. penalty. (#4675)              | SakuraUmi
2023-12-29 | server : allow to generate multimodal embeddings (#4681)                 | Karthik Sethuraman
2023-12-28 | Fix OpenAI server sampling w.r.t. temp and seed (#4668)                  | Justine Tunney
2023-12-23 | server : allow to specify custom prompt for penalty calculation (#3727)  | Alexey Parfenov
2023-12-17 | server : disable llm logs if SERVER_VERBOSE is off (#3792)               | olexiyb
2023-12-17 | server : fix grammar being ignored (#4494)                               | AdithyanI
2023-12-17 | server : fix possible ambiguity in content type charset (#4501)          | Alexey Parfenov
2023-12-17 | server : allow requests larger than 8K (#4500)                           | mzcu
2023-12-15 | server : add optional API Key Authentication example (#4441)             | ShadovvBeast
2023-12-13 | server : fix handling of characters that span multiple tokens when streaming ... | shibe2
2023-12-12 | server : fix local model name in server (#4420)                          | Vladimir Zorin
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309)                     | Georgi Gerganov
2023-12-06 | server : recognize cache_prompt parameter in OAI API (#4347)             | Georgi Gerganov
2023-12-03 | server : fix OpenAI API `stop` field to be optional (#4299)              | Ed Lee
2023-12-01 | llama : support optional tensors (#4283)                                 | Georgi Gerganov
2023-12-01 | server : add --log-disable to disable logging to file (#4260)            | Ziad Ben Hadj-Alouane
2023-12-01 | server : add single-client multi-prompt support (#4232)                  | Ziad Ben Hadj-Alouane
2023-11-25 | server : OAI API compatibility (#4198)                                   | Georgi Gerganov
2023-11-23 | Fix incorrect format strings and uninitialized variables. (#4133)        | Haohui Mai
2023-11-19 | server : relay error messages (#4131)                                    | SoftwareRenderer
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)       | Kerfuffle
2023-11-10 | server : fix crash when prompt exceeds context size (#3996)              | Alexey Parfenov
2023-11-08 | server : add min_p param (#3877)                                         | Mihai
2023-11-02 | build : link against build info instead of compiling against it (#3879)  | cebtenzzre
2023-11-01 | llama : implement YaRN RoPE scaling (#2268)                              | cebtenzzre
2023-11-01 | server : re-enable completion and embedded at the same time (#3876)      | Adrian Hesketh
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)      | Kerfuffle
2023-10-26 | server : do not release slot on image input (#3798)                      | Georgi Gerganov
2023-10-24 | server : add parameter -tb N, --threads-batch N (#3584) (#3768)          | cebtenzzre
2023-10-24 | server : do not block system prompt update (#3767)                       | Georgi Gerganov
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720) | Marcus Dunn
2023-10-22 | server : parallel decoding and multimodal (#3677)                        | Georgi Gerganov