path: root/examples/server/README.md
| Age | Commit message | Author |
|------------|----------------|--------|
| 2024-02-07 | server : update `/props` with "total_slots" value (#5373) | Justin Parker |
| 2024-02-06 | server : add `dynatemp_range` and `dynatemp_exponent` (#5352) | Michael Coppola |
| 2024-02-05 | server : allow to get default generation settings for completion (#5307) | Alexey Parfenov |
| 2024-01-30 | server : improve README (#5209) | Wu Jian Ping |
| 2024-01-28 | docker : add server-first container images (#5157) | Kyle Mistele |
| 2024-01-27 | server : add self-extend support (#5104) | Maximilian Winter |
| 2024-01-11 | server : support for multiple api keys (#4864) | Michael Coppola |
| 2024-01-11 | server : update readme to document the new `/health` endpoint (#4866) | Behnam M |
| 2024-01-09 | server : update readme about token probs (#4777) | Behnam M |
| 2024-01-09 | server : add api-key flag to documentation (#4832) | Zsapi |
| 2024-01-04 | server : fix options in README.md (#4765) | Michael Coppola |
| 2023-12-29 | server : allow to generate multimodal embeddings (#4681) | Karthik Sethuraman |
| 2023-12-23 | server : allow to specify custom prompt for penalty calculation (#3727) | Alexey Parfenov |
| 2023-12-10 | Update README.md (#4388) | Yueh-Po Peng |
| 2023-11-25 | server : OAI API compatibility (#4198) | Georgi Gerganov |
| 2023-11-08 | server : add min_p param (#3877) | Mihai |
| 2023-11-05 | server : fix typo for --alias shortcut from -m to -a (#3958) | Thái Hoàng Tâm |
| 2023-10-22 | server : parallel decoding and multimodal (#3677) | Georgi Gerganov |
| 2023-10-17 | editorconfig : remove trailing spaces | Georgi Gerganov |
| 2023-10-17 | server : documentation of JSON return value of /completion endpoint (#3632) | coezbek |
| 2023-10-06 | server : docs fix default values and add n_probs (#3506) | Mihai |
| 2023-10-02 | infill : add new example + extend server API (#3296) | vvhg1 |
| 2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301) | slaren |
| 2023-08-27 | server : add `/detokenize` endpoint (#2802) | Bruce MacDonald |
| 2023-08-26 | examples : skip unnecessary external lib in server README.md how-to (#2804) | lon |
| 2023-08-23 | server : allow json array in prompt or content for direct token input (#2306) | Xiao-Yong Jin |
| 2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398) | Georgi Gerganov |
| 2023-08-14 | server : add --numa support (#2524) | Cheng Shao |
| 2023-08-08 | Allow passing grammar to completion endpoint (#2532) | Martin Krasser |
| 2023-08-01 | fix a typo in examples/server/README.md (#2478) | Bono Lv |
| 2023-07-15 | llama : add custom RoPE (#2054) | Xiao-Yong Jin |
| 2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | Howard Su |
| 2023-07-11 | Support using mmap when applying LoRA (#2095) | Howard Su |
| 2023-07-06 | convert : update for baichuan (#2081) | Judd |
| 2023-07-05 | Expose generation timings from server & update completions.js (#2116) | Tobias Lütke |
| 2023-07-05 | Update Server Instructions (#2113) | Jesse Jojo Johnson |
| 2023-07-05 | Update server instructions for web front end (#2103) | Jesse Jojo Johnson |
| 2023-07-04 | Add an API example using server.cpp similar to OAI. (#2009) | jwj7140 |
| 2023-06-29 | Use unsigned for random seed (#2006) | Howard Su |
| 2023-06-20 | [Fix] Reenable server embedding endpoint (#1937) | Henri Vasserman |
| 2023-06-17 | Server Example Refactor and Improvements (#1570) | Randall Fitzgerald |
| 2023-06-15 | readme : server compile flag (#1874) | Srinivas Billa |
| 2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler |
| 2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler |
| 2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle |
| 2023-05-21 | examples : add server example with REST API (#1443) | Steward Garcia |