Repository: ik_llama.cpp.git (branch: main)
Path: examples/server/server.cpp

Commit log:
Age | Commit message | Author
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-20 | server : fix smart slot selection (#8020) | sasha0552
2024-06-18 | Only use FIM middle token if it exists (#7648) | Sigbjørn Skjæret
2024-06-12 | server : restore numeric prompts (#7883) | Georgi Gerganov
2024-06-10 | server : improve "prompt" handling (#7847) | Georgi Gerganov
2024-06-08 | server : smart slot selection using Longest Common Prefix (#7728) | sasha0552
2024-06-07 | server : do not get prompt in infill mode (#7286) | woodx
2024-06-06 | imatrix : migrate to gpt_params (#7771) | Georgi Gerganov
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-06-01 | server : new UI (#7633) | Yazan Agha-Schrader
2024-05-22 | common : normalize naming style (#7462) | Georgi Gerganov
2024-05-20 | server : return error on too large embedding input (#7389) | Georgi Gerganov
2024-05-19 | server: fix seed being reported back (#7382) | Johannes Gäßler
2024-05-17 | server : add support for the RPC backend (#7305) | Radoslav Gerganov
2024-05-14 | server: free sampling contexts on exit (#7264) | Steve Grubb
2024-05-11 | fix system prompt handling (#7153) | Xuan Son Nguyen
2024-05-11 | server : free llama_batch on exit (#7212) | Steve Grubb
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203) | Johannes Gäßler
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | Johannes Gäßler
2024-05-08 | server : add_special option for tokenize endpoint (#7059) | Johan
2024-05-07 | server: fix incorrectly reported token probabilities (#7125) | Johannes Gäßler
2024-05-04 | If first token generated from the server is the stop word the server will cra... | maor-ps
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-30 | Improve usability of --model-url & related flags (#6930) | Olivier Chafik
2024-04-27 | ci: server: tests python env on github container ubuntu latest / fix n_predic... | Pierrick Hymbert
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658) | Pierrick Hymbert
2024-04-26 | server: stop generation at `n_ctx_train` if `n_predict` is not set (#6638) | Pierrick Hymbert
2024-04-24 | common : revert showing control tokens by default for server (#6860) | Kyle Mistele
2024-04-24 | Server: fix seed for multiple slots (#6835) | Johannes Gäßler
2024-04-21 | llama : support Llama 3 HF conversion (#6745) | Pedro Cuenca
2024-04-12 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings,... | Olivier Chafik
2024-04-12 | server : coherent log output for KV cache full (#6637) | Pierrick Hymbert
2024-04-09 | BERT tokenizer fixes (#6498) | Jared Van Bortel
2024-04-08 | llama : save and restore kv cache for single seq id (#6341) | Jan Boon
2024-04-04 | server : remove obsolete --memory-f32 option | Georgi Gerganov
2024-04-04 | server : add option to disable KV offload (#6468) | Xiao-Yong Jin
2024-03-28 | server : stop gracefully on SIGTERM (#6348) | Eric Zhang
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122) | compilade
2024-03-26 | server : add `n_discard` parameter (#6300) | Jan Boon
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299) | slaren
2024-03-25 | Server: clean up OAI params parsing function (#6284) | Xuan Son Nguyen
2024-03-23 | common: llama_load_model_from_url split support (#6192) | Pierrick Hymbert
2024-03-22 | json-schema-to-grammar : fix order of props + non-str const/enum (#6232) | Olivier Chafik
2024-03-22 | server : fix n_keep always showing as 0 in response (#6211) | Jan Boon
2024-03-22 | server : enable continuous batching by default (#6231) | Georgi Gerganov
2024-03-21 | json-schema-to-grammar improvements (+ added to server) (#5978) | Olivier Chafik
2024-03-17 | common: llama_load_model_from_url using --model-url (#6098) | Pierrick Hymbert
2024-03-13 | llama : add pipeline parallelism support (#6017) | slaren
2024-03-13 | Server: Use multi-task for embeddings endpoint (#6001) | Xuan Son Nguyen
2024-03-11 | Server: format error to json (#5961) | Xuan Son Nguyen
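
Note: the 2024-06-08 entry above (#7728) only names the heuristic it introduces, smart slot
selection via Longest Common Prefix. As a minimal self-contained C++ sketch of the idea,
and not the actual server.cpp code (all names here, such as server_slot_sketch and
select_slot, are hypothetical), it could look like this:

    // Sketch: choose the server slot whose cached tokens share the longest
    // common prefix with the incoming prompt, so the most KV-cache work is
    // reused. Hypothetical types/names, not the real server.cpp structures.
    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    using llama_token = int; // stand-in for the real llama.cpp typedef

    struct server_slot_sketch {
        int id;
        std::vector<llama_token> cache_tokens; // tokens already in this slot's KV cache
    };

    // Length of the common prefix shared by two token sequences.
    static size_t common_prefix_len(const std::vector<llama_token> & a,
                                    const std::vector<llama_token> & b) {
        const size_t n = std::min(a.size(), b.size());
        size_t i = 0;
        while (i < n && a[i] == b[i]) {
            ++i;
        }
        return i;
    }

    // Index of the slot with the longest common prefix against the prompt;
    // falls back to slot 0 when nothing matches.
    static int select_slot(const std::vector<server_slot_sketch> & slots,
                           const std::vector<llama_token> & prompt) {
        int best = 0;
        size_t best_len = 0;
        for (size_t s = 0; s < slots.size(); ++s) {
            const size_t len = common_prefix_len(slots[s].cache_tokens, prompt);
            if (len > best_len) {
                best_len = len;
                best = (int) s;
            }
        }
        return best;
    }

    int main() {
        const std::vector<server_slot_sketch> slots = {
            { 0, { 10, 11 } },
            { 1, { 10, 11, 12, 13 } },
        };
        const std::vector<llama_token> prompt = { 10, 11, 12, 14 };
        // Slot 1 wins: it shares a 3-token prefix vs. slot 0's 2-token prefix.
        std::cout << "selected slot: " << slots[select_slot(slots, prompt)].id << "\n";
        return 0;
    }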