ik_llama.cpp.git (branch: main)
path: root/examples/server/server.cpp
Age  |  Commit message  |  Author
2024-03-25  |  Server: clean up OAI params parsing function (#6284)  |  Xuan Son Nguyen
2024-03-23  |  common: llama_load_model_from_url split support (#6192)  |  Pierrick Hymbert
2024-03-22  |  json-schema-to-grammar : fix order of props + non-str const/enum (#6232)  |  Olivier Chafik
2024-03-22  |  server : fix n_keep always showing as 0 in response (#6211)  |  Jan Boon
2024-03-22  |  server : enable continuous batching by default (#6231)  |  Georgi Gerganov
2024-03-21  |  json-schema-to-grammar improvements (+ added to server) (#5978)  |  Olivier Chafik
2024-03-17  |  common: llama_load_model_from_url using --model-url (#6098)  |  Pierrick Hymbert
2024-03-13  |  llama : add pipeline parallelism support (#6017)  |  slaren
2024-03-13  |  Server: Use multi-task for embeddings endpoint (#6001)  |  Xuan Son Nguyen
2024-03-11  |  Server: format error to json (#5961)  |  Xuan Son Nguyen
2024-03-11  |  server : maintain chat completion id for streaming responses (#5988)  |  Minsoo Cheong
2024-03-09  |  server: benchmark: chat/completions scenario and other llm servers comparison...  |  Pierrick Hymbert
2024-03-09  |  server : print chat template info  |  Georgi Gerganov
2024-03-09  |  server : fix metrics init (#5964)  |  Georgi Gerganov
2024-03-09  |  server : normalize embeddings (#5956)  |  SeungWon Jeong
2024-03-09  |  server : fix passing prompt as tokens (#5955)  |  Alexey Parfenov
2024-03-09  |  server : simplify logic for empty prompts (#5953)  |  Georgi Gerganov
2024-03-09  |  Server: reorganize some http logic (#5939)  |  Xuan Son Nguyen
2024-03-09  |  server : add SSL support (#5926)  |  Gabe Goodhart
2024-03-09  |  server: tests: add truncated prompt tests, better kv cache size (#5933)  |  Pierrick Hymbert
2024-03-08  |  llama : support Mamba Selective State Space Models (#5328)  |  compilade
2024-03-08  |  server: metrics: add llamacpp:prompt_seconds_total and llamacpp:tokens_predic...  |  Pierrick Hymbert
2024-03-08  |  server : fix EOS token detection with disabled cache (#5938)  |  Georgi Gerganov
2024-03-07  |  server : add `/v1/completions` endpoint (#5914)  |  Minsoo Cheong
2024-03-07  |  server : refactor (#5882)  |  Georgi Gerganov
2024-03-04  |  llama : fix embeddings (#5796)  |  Georgi Gerganov
2024-03-04  |  add alias for chat template (#5858)  |  Xuan Son Nguyen
2024-03-03  |  server : init http requests thread pool with --parallel if set (#5836)  |  Pierrick Hymbert
2024-03-02  |  server: tests: passkey challenge / self-extend with context shift demo (#5832)  |  Pierrick Hymbert
2024-03-01  |  llama : cleanup unused mmq flags (#5772)  |  Pierrick Hymbert
2024-03-01  |  server: allow to override threads server pool with --threads-http (#5794)  |  Pierrick Hymbert
2024-03-01  |  server : fix newlines in help (#5785)  |  Georgi Gerganov
2024-02-29  |  Server: normalize naming (#5779)  |  Xuan Son Nguyen
2024-02-28  |  server : hit Ctrl+C twice to exit (#5734)  |  Xuan Son Nguyen
2024-02-28  |  server : add "/chat/completions" alias for "/v1/...` (#5722)  |  Jorge A
2024-02-26  |  fix server hangs on empty prompt (#5733)  |  Xuan Son Nguyen
2024-02-25  |  llama : refactor k-shift implementation + KV defragmentation (#5691)  |  Georgi Gerganov
2024-02-25  |  server : fix crash when system prompt is bigger than batch size (#5714)  |  compilade
2024-02-25  |  server: logs - unified format and --log-format option (#5700)  |  Pierrick Hymbert
2024-02-25  |  server: concurrency fix + monitoring - add /metrics prometheus compatible end...  |  Pierrick Hymbert
2024-02-25  |  code : normalize enum names (#5697)  |  Georgi Gerganov
2024-02-24  |  server: continue to update other slots on embedding concurrent request (#5699)  |  Pierrick Hymbert
2024-02-24  |  server: init functional tests (#5566)  |  Pierrick Hymbert
2024-02-23  |  server : add KV cache quantization options (#5684)  |  AlpinDale
2024-02-22  |  server : fallback to chatml, add AlphaMonarch chat template (#5628)  |  Xuan Son Nguyen
2024-02-21  |  examples : do not assume BOS when shifting context (#5622)  |  Jared Van Bortel
2024-02-21  |  server: health: fix race condition on slots data using tasks queue (#5634)  |  Pierrick Hymbert
2024-02-20  |  server : support llava 1.6 (#5553)  |  CJ Pais
2024-02-20  |  Server: use llama_chat_apply_template (#5593)  |  Xuan Son Nguyen
2024-02-20  |  server : health endpoint configurable failure on no slot (#5594)  |  Pierrick Hymbert