ik_llama.cpp.git (branch: main)
path: root/examples/server/server.cpp
Age | Commit message | Author
2024-02-14 | llava : support v1.6 (#5267) | John
2024-02-11 | server : allow to specify tokens as strings in logit_bias (#5003) | Alexey Parfenov
2024-02-11 | server : add llama2 chat template (#5425) | Xuan Son Nguyen
2024-02-09 | server : fix prompt caching for repeated prompts (#5420) | Riley Stewart
2024-02-07 | server : update `/props` with "total_slots" value (#5373) | Justin Parker
2024-02-06 | server : remove model.json endpoint (#5371) | Alexey Parfenov
2024-02-06 | server : include total "num_slots" in props endpoint (#5349) | Justin Parker
2024-02-06 | server : add `dynatemp_range` and `dynatemp_exponent` (#5352) | Michael Coppola
2024-02-06 | server : various fixes for the prompt field in /completion (#5300) | Niall Coates
2024-02-05 | server : allow to get default generation settings for completion (#5307) | Alexey Parfenov
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291) | Michael Klimenko
2024-01-31 | llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240) | Georgi Gerganov
2024-01-30 | server : fix context shift (#5195) | Georgi Gerganov
2024-01-29 | server : embeddings compatibility for OpenAI (#5190) | Wu Jian Ping
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-27 | Remove unused data and add fixes (#5154) | Michael Klimenko
2024-01-27 | server : add self-extend support (#5104) | Maximilian Winter
2024-01-26 | server : refactored the task processing logic (#5065) | Xuan Son Nguyen
2024-01-18 | server : defer tasks when "slot unavailable" (#5018) | Xuan Son Nguyen
2024-01-13 | server : fix prompt caching with system prompt (#4914) | Georgi Gerganov
2024-01-13 | server : fix deadlock that occurs in multi-prompt scenarios (#4905) | Ziad Ben Hadj-Alouane
2024-01-13 | server : fix crash with multimodal models without BOS token (#4904) | makomk
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-11 | server : fix infill when prompt is empty (#4833) | Georgi Gerganov
2024-01-11 | server : implement credentialed CORS (#4514) | Laura
2024-01-11 | server : support for multiple api keys (#4864) | Michael Coppola
2024-01-11 | server : add `LOG_INFO` when model is successfully loaded (#4881) | Behnam M
2024-01-11 | server : fix typo in model name (#4876) | Isaac McFadyen
2024-01-11 | server : fix build + rename enums (#4870) | Georgi Gerganov
2024-01-10 | server : add a `/health` endpoint (#4860) | Behnam M
2024-01-07 | server : fix n_predict check (#4798) | Georgi Gerganov
2024-01-04 | server : send token probs for "stream == false" (#4714) | Georgi Gerganov
2024-01-02 | editorconfig : fix whitespace and indentation #4710 | Georgi Gerganov
2024-01-02 | server : add --override-kv parameter (#4710) | minarchist
2023-12-30 | clip : refactor + bug fixes (#4696) | Georgi Gerganov
2023-12-29 | server : replace sleep with condition variables (#4673) | Justine Tunney
2023-12-29 | server : fix OpenAI server sampling w.r.t. penalty. (#4675) | SakuraUmi
2023-12-29 | server : allow to generate multimodal embeddings (#4681) | Karthik Sethuraman
2023-12-28 | Fix OpenAI server sampling w.r.t. temp and seed (#4668) | Justine Tunney
2023-12-23 | server : allow to specify custom prompt for penalty calculation (#3727) | Alexey Parfenov
2023-12-17 | server : disable llm logs if SERVER_VERBOSE is off (#3792) | olexiyb
2023-12-17 | server : fix grammar being ignored (#4494) | AdithyanI
2023-12-17 | server : fix possible ambiguity in content type charset (#4501) | Alexey Parfenov
2023-12-17 | server : allow requests larger than 8K (#4500) | mzcu
2023-12-15 | server : add optional API Key Authentication example (#4441) | ShadovvBeast
2023-12-13 | server : fix handling of characters that span multiple tokens when streaming ... | shibe2
2023-12-12 | server : fix local model name in server (#4420) | Vladimir Zorin
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309) | Georgi Gerganov
2023-12-06 | server : recognize cache_prompt parameter in OAI API (#4347) | Georgi Gerganov
2023-12-03 | server : fix OpenAI API `stop` field to be optional (#4299) | Ed Lee
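
Several entries above touch the server's HTTP surface: a `/health` endpoint was added in #4860, and `/props` gained a "total_slots" value in #5373. Below is a minimal client-side sketch for probing both, assuming a server is already running on the common default of localhost:8080 (adjust to your --host/--port) and that the exact response layout may differ between the versions logged here:

    import json
    import urllib.request

    # Assumed base URL; match whatever --host/--port the server was started with.
    BASE = "http://localhost:8080"

    def get_json(path: str) -> dict:
        # Plain unauthenticated GET; if API keys are enabled (#4441, #4864),
        # an "Authorization: Bearer <key>" header would also be needed.
        with urllib.request.urlopen(BASE + path) as resp:
            return json.load(resp)

    print(get_json("/health"))                    # endpoint added in #4860
    print(get_json("/props").get("total_slots"))  # key name per #5373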