ik_llama.cpp.git (branch: main)
path: root/examples/server/README.md
Age        | Commit message                                                          | Author
2024-02-07 | server : update `/props` with "total_slots" value (#5373)               | Justin Parker
2024-02-06 | server : add `dynatemp_range` and `dynatemp_exponent` (#5352)           | Michael Coppola
2024-02-05 | server : allow to get default generation settings for completion (#5307) | Alexey Parfenov
2024-01-30 | server : improve README (#5209)                                         | Wu Jian Ping
2024-01-28 | docker : add server-first container images (#5157)                      | Kyle Mistele
2024-01-27 | server : add self-extend support (#5104)                                | Maximilian Winter
2024-01-11 | server : support for multiple api keys (#4864)                          | Michael Coppola
2024-01-11 | server : update readme to document the new `/health` endpoint (#4866)   | Behnam M
2024-01-09 | server : update readme about token probs (#4777)                        | Behnam M
2024-01-09 | server : add api-key flag to documentation (#4832)                      | Zsapi
2024-01-04 | server : fix options in README.md (#4765)                               | Michael Coppola
2023-12-29 | server : allow to generate multimodal embeddings (#4681)                | Karthik Sethuraman
2023-12-23 | server : allow to specify custom prompt for penalty calculation (#3727) | Alexey Parfenov
2023-12-10 | Update README.md (#4388)                                                | Yueh-Po Peng
2023-11-25 | server : OAI API compatibility (#4198)                                  | Georgi Gerganov
2023-11-08 | server : add min_p param (#3877)                                        | Mihai
2023-11-05 | server : fix typo for --alias shortcut from -m to -a (#3958)            | Thái Hoàng Tâm
2023-10-22 | server : parallel decoding and multimodal (#3677)                       | Georgi Gerganov
2023-10-17 | editorconfig : remove trailing spaces                                   | Georgi Gerganov
2023-10-17 | server : documentation of JSON return value of /completion endpoint (#3632) | coezbek
2023-10-06 | server : docs fix default values and add n_probs (#3506)                | Mihai
2023-10-02 | infill : add new example + extend server API (#3296)                    | vvhg1
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301) | slaren
2023-08-27 | server : add `/detokenize` endpoint (#2802)                             | Bruce MacDonald
2023-08-26 | examples : skip unnecessary external lib in server README.md how-to (#2804) | lon
2023-08-23 | server : allow json array in prompt or content for direct token input (#2306) | Xiao-Yong Jin
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398)           | Georgi Gerganov
2023-08-14 | server : add --numa support (#2524)                                     | Cheng Shao
2023-08-08 | Allow passing grammar to completion endpoint (#2532)                    | Martin Krasser
2023-08-01 | fix a typo in examples/server/README.md (#2478)                         | Bono Lv
2023-07-15 | llama : add custom RoPE (#2054)                                         | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206)          | Howard Su
2023-07-11 | Support using mmap when applying LoRA (#2095)                           | Howard Su
2023-07-06 | convert : update for baichuan (#2081)                                   | Judd
2023-07-05 | Expose generation timings from server & update completions.js (#2116)   | Tobias Lütke
2023-07-05 | Update Server Instructions (#2113)                                      | Jesse Jojo Johnson
2023-07-05 | Update server instructions for web front end (#2103)                    | Jesse Jojo Johnson
2023-07-04 | Add an API example using server.cpp similar to OAI. (#2009)             | jwj7140
2023-06-29 | Use unsigned for random seed (#2006)                                    | Howard Su
2023-06-20 | [Fix] Reenable server embedding endpoint (#1937)                        | Henri Vasserman
2023-06-17 | Server Example Refactor and Improvements (#1570)                        | Randall Fitzgerald
2023-06-15 | readme : server compile flag (#1874)                                    | Srinivas Billa
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827)                    | Johannes Gäßler
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)           | Johannes Gäßler
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle
2023-05-21 | examples : add server example with REST API (#1443)                     | Steward Garcia