ik_llama.cpp.git (branch: main)
path: root/examples/server
Age        | Commit message                                                                   | Author
2024-01-11 | server : implement credentialed CORS (#4514)                                     | Laura
2024-01-11 | server : support for multiple api keys (#4864)                                   | Michael Coppola
2024-01-11 | server : add `LOG_INFO` when model is successfully loaded (#4881)                | Behnam M
2024-01-11 | server : fix typo in model name (#4876)                                          | Isaac McFadyen
2024-01-11 | server : update readme to document the new `/health` endpoint (#4866)            | Behnam M
2024-01-11 | server : fix build + rename enums (#4870)                                        | Georgi Gerganov
2024-01-10 | server : add a `/health` endpoint (#4860)                                        | Behnam M
2024-01-09 | server : update readme about token probs (#4777)                                 | Behnam M
2024-01-09 | server : add api-key flag to documentation (#4832)                               | Zsapi
2024-01-07 | server : fix n_predict check (#4798)                                             | Georgi Gerganov
2024-01-04 | server : send token probs for "stream == false" (#4714)                          | Georgi Gerganov
2024-01-04 | server : fix options in README.md (#4765)                                        | Michael Coppola
2024-01-03 | server : throw an error when `slot unavailable` (#4741)                          | Justin Parker
2024-01-02 | server : add token counts to html footer (#4738)                                 | Phil H
2024-01-02 | editorconfig : fix whitespace and indentation #4710                              | Georgi Gerganov
2024-01-02 | server : add --override-kv parameter (#4710)                                     | minarchist
2023-12-30 | clip : refactor + bug fixes (#4696)                                              | Georgi Gerganov
2023-12-29 | cmake : fix ld warning duplicate libraries libllama.a (#4671)                    | Cuong Trinh Manh
2023-12-29 | server : replace sleep with condition variables (#4673)                          | Justine Tunney
2023-12-29 | server : fix OpenAI server sampling w.r.t. penalty. (#4675)                      | SakuraUmi
2023-12-29 | server : allow to generate multimodal embeddings (#4681)                         | Karthik Sethuraman
2023-12-28 | Fix OpenAI server sampling w.r.t. temp and seed (#4668)                          | Justine Tunney
2023-12-23 | server : allow to specify custom prompt for penalty calculation (#3727)          | Alexey Parfenov
2023-12-17 | server : disable llm logs if SERVER_VERBOSE is off (#3792)                       | olexiyb
2023-12-17 | server : fix grammar being ignored (#4494)                                       | AdithyanI
2023-12-17 | server : fix possible ambiguity in content type charset (#4501)                  | Alexey Parfenov
2023-12-17 | server : allow requests larger than 8K (#4500)                                   | mzcu
2023-12-15 | server : add optional API Key Authentication example (#4441)                     | ShadovvBeast
2023-12-13 | server : fix handling of characters that span multiple tokens when streaming ...| shibe2
2023-12-12 | server : tweak default sampling parameters (#4367)                               | kalomaze
2023-12-12 | english : use `typos` to fix comments and logs (#4354)                           | Richard Kiss
2023-12-12 | server : fix local model name in server (#4420)                                  | Vladimir Zorin
2023-12-10 | Update README.md (#4388)                                                         | Yueh-Po Peng
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309)                             | Georgi Gerganov
2023-12-06 | server : recognize cache_prompt parameter in OAI API (#4347)                     | Georgi Gerganov
2023-12-03 | server : fix OpenAI API `stop` field to be optional (#4299)                      | Ed Lee
2023-12-03 | py : add grammar to oai like api (#4294)                                         | Rickard Edén
2023-12-01 | llama : support optional tensors (#4283)                                         | Georgi Gerganov
2023-12-01 | server : add --log-disable to disable logging to file (#4260)                    | Ziad Ben Hadj-Alouane
2023-12-01 | server : add single-client multi-prompt support (#4232)                          | Ziad Ben Hadj-Alouane
2023-11-30 | py : fix oai proxy (#3972)                                                       | rhjdvsgsgks
2023-11-25 | server : OAI API compatibility (#4198)                                           | Georgi Gerganov
2023-11-23 | Fix incorrect format strings and uninitialized variables. (#4133)                | Haohui Mai
2023-11-19 | server : relay error messages (#4131)                                            | SoftwareRenderer
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)               | Kerfuffle
2023-11-10 | server : fix crash when prompt exceeds context size (#3996)                      | Alexey Parfenov
2023-11-10 | server : allow continue edit on completion mode (#3950)                          | Jhen-Jie Hong
2023-11-08 | server : add min_p param (#3877)                                                 | Mihai
2023-11-07 | llava : expose as a shared library for downstream projects (#3613)               | Damian Stewart
2023-11-05 | server : fix typo for --alias shortcut from -m to -a (#3958)                     | Thái Hoàng Tâm