path: root/examples/server/server.cpp
Age        | Commit message | Author
2024-02-18 | common, server : surface min_keep as its own parameter (#5567) | Robey Holderith
2024-02-18 | server : slots monitoring endpoint (#5550) | Pierrick Hymbert
2024-02-18 | server : enhanced health endpoint (#5548) | Pierrick Hymbert
2024-02-18 | server : --n-predict option document and cap to max value (#5549) | Pierrick Hymbert
2024-02-18 | server : graceful server shutdown (#5244) | Daniel Hiltgen
2024-02-16 | server : add "samplers" param to control the samplers order (#5494) | Alexey Parfenov
2024-02-16 | server : fix system prompt cli (#5516) | Rőczey Barnabás
2024-02-16 | ggml : add numa options (#5377) | bmwl
2024-02-15 | llava : fix memory management bug (#5491) | Elbios
2024-02-14 | llava : support v1.6 (#5267) | John
2024-02-11 | server : allow to specify tokens as strings in logit_bias (#5003) | Alexey Parfenov
2024-02-11 | server : add llama2 chat template (#5425) | Xuan Son Nguyen
2024-02-09 | server : fix prompt caching for repeated prompts (#5420) | Riley Stewart
2024-02-07 | server : update `/props` with "total_slots" value (#5373) | Justin Parker
2024-02-06 | server : remove model.json endpoint (#5371) | Alexey Parfenov
2024-02-06 | server : include total "num_slots" in props endpoint (#5349) | Justin Parker
2024-02-06 | server : add `dynatemp_range` and `dynatemp_exponent` (#5352) | Michael Coppola
2024-02-06 | server : various fixes for the prompt field in /completion (#5300) | Niall Coates
2024-02-05 | server : allow to get default generation settings for completion (#5307) | Alexey Parfenov
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291) | Michael Klimenko
2024-01-31 | llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240) | Georgi Gerganov
2024-01-30 | server : fix context shift (#5195) | Georgi Gerganov
2024-01-29 | server : embeddings compatibility for OpenAI (#5190) | Wu Jian Ping
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-27 | Remove unused data and add fixes (#5154) | Michael Klimenko
2024-01-27 | server : add self-extend support (#5104) | Maximilian Winter
2024-01-26 | server : refactored the task processing logic (#5065) | Xuan Son Nguyen
2024-01-18 | server : defer tasks when "slot unavailable" (#5018) | Xuan Son Nguyen
2024-01-13 | server : fix prompt caching with system prompt (#4914) | Georgi Gerganov
2024-01-13 | server : fix deadlock that occurs in multi-prompt scenarios (#4905) | Ziad Ben Hadj-Alouane
2024-01-13 | server : fix crash with multimodal models without BOS token (#4904) | makomk
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-11 | server : fix infill when prompt is empty (#4833) | Georgi Gerganov
2024-01-11 | server : implement credentialed CORS (#4514) | Laura
2024-01-11 | server : support for multiple api keys (#4864) | Michael Coppola
2024-01-11 | server : add `LOG_INFO` when model is successfully loaded (#4881) | Behnam M
2024-01-11 | server : fix typo in model name (#4876) | Isaac McFadyen
2024-01-11 | server : fix build + rename enums (#4870) | Georgi Gerganov
2024-01-10 | server : add a `/health` endpoint (#4860) | Behnam M
2024-01-07 | server : fix n_predict check (#4798) | Georgi Gerganov
2024-01-04 | server : send token probs for "stream == false" (#4714) | Georgi Gerganov
2024-01-02 | editorconfig : fix whitespace and indentation #4710 | Georgi Gerganov
2024-01-02 | server : add --override-kv parameter (#4710) | minarchist
2023-12-30 | clip : refactor + bug fixes (#4696) | Georgi Gerganov
2023-12-29 | server : replace sleep with condition variables (#4673) | Justine Tunney
2023-12-29 | server : fix OpenAI server sampling w.r.t. penalty. (#4675) | SakuraUmi
2023-12-29 | server : allow to generate multimodal embeddings (#4681) | Karthik Sethuraman
2023-12-28 | Fix OpenAI server sampling w.r.t. temp and seed (#4668) | Justine Tunney
2023-12-23 | server : allow to specify custom prompt for penalty calculation (#3727) | Alexey Parfenov
2023-12-17 | server : disable llm logs if SERVER_VERBOSE is off (#3792) | olexiyb