ik_llama.cpp.git (branch: main)
path: root/examples/server/server.cpp
Date        Commit message  (Author)
2023-12-29  server : replace sleep with condition variables (#4673)  (Justine Tunney)
2023-12-29  server : fix OpenAI server sampling w.r.t. penalty. (#4675)  (SakuraUmi)
2023-12-29  server : allow to generate multimodal embeddings (#4681)  (Karthik Sethuraman)
2023-12-28  Fix OpenAI server sampling w.r.t. temp and seed (#4668)  (Justine Tunney)
2023-12-23  server : allow to specify custom prompt for penalty calculation (#3727)  (Alexey Parfenov)
2023-12-17  server : disable llm logs if SERVER_VERBOSE is off (#3792)  (olexiyb)
2023-12-17  server : fix grammar being ignored (#4494)  (AdithyanI)
2023-12-17  server : fix possible ambiguity in content type charset (#4501)  (Alexey Parfenov)
2023-12-17  server : allow requests larger than 8K (#4500)  (mzcu)
2023-12-15  server : add optional API Key Authentication example (#4441)  (ShadovvBeast)
2023-12-13  server : fix handling of characters that span multiple tokens when streaming ...  (shibe2)
2023-12-12  server : fix local model name in server (#4420)  (Vladimir Zorin)
2023-12-07  llama : per-layer KV cache + quantum K cache (#4309)  (Georgi Gerganov)
2023-12-06  server : recognize cache_prompt parameter in OAI API (#4347)  (Georgi Gerganov)
2023-12-03  server : fix OpenAI API `stop` field to be optional (#4299)  (Ed Lee)
2023-12-01  llama : support optional tensors (#4283)  (Georgi Gerganov)
2023-12-01  server : add --log-disable to disable logging to file (#4260)  (Ziad Ben Hadj-Alouane)
2023-12-01  server : add single-client multi-prompt support (#4232)  (Ziad Ben Hadj-Alouane)
2023-11-25  server : OAI API compatibility (#4198)  (Georgi Gerganov)
2023-11-23  Fix incorrect format strings and uninitialized variables. (#4133)  (Haohui Mai)
2023-11-19  server : relay error messages (#4131)  (SoftwareRenderer)
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  (Kerfuffle)
2023-11-10  server : fix crash when prompt exceeds context size (#3996)  (Alexey Parfenov)
2023-11-08  server : add min_p param (#3877)  (Mihai)
2023-11-02  build : link against build info instead of compiling against it (#3879)  (cebtenzzre)
2023-11-01  llama : implement YaRN RoPE scaling (#2268)  (cebtenzzre)
2023-11-01  server : re-enable completion and embedded at the same time (#3876)  (Adrian Hesketh)
2023-10-29  Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)  (Kerfuffle)
2023-10-26  server : do not release slot on image input (#3798)  (Georgi Gerganov)
2023-10-24  server : add parameter -tb N, --threads-batch N (#3584) (#3768)  (cebtenzzre)
2023-10-24  server : do not block system prompt update (#3767)  (Georgi Gerganov)
2023-10-23  llama : remove token functions with `context` args in favor of `model` (#3720)  (Marcus Dunn)
2023-10-22  server : parallel decoding and multimodal (#3677)  (Georgi Gerganov)
2023-10-20  sampling : refactor init to use llama_sampling_params (#3696)  (Georgi Gerganov)
2023-10-20  server : fix uninitialized sampling context (close #3685)  (Georgi Gerganov)
2023-10-18  speculative : add tree-based sampling example (#3624)  (Georgi Gerganov)
2023-10-12  server : fix kv cache management (#3588)  (Georgi Gerganov)
2023-10-11  server : add parameter -tb N, --threads-batch N (#3584)  (Michael Coppola)
2023-10-11  common : fix mirostat state when using multiple sequences (#3543)  (Kerfuffle)
2023-10-10  infill. : fix tokenization (#3508)  (vvhg1)
2023-10-06  server : reuse llama_sample_token common util (#3494)  (Jhen-Jie Hong)
2023-10-05  build : use std::make_tuple() for compatibility with older GCC versions (#3488)  (Kenvix ⭐)
2023-10-05  server : fix incorrect num_tokens_predicted (#3480)  (Jhen-Jie Hong)
2023-10-03  llama : fix session saving/loading (#3400)  (Georgi Gerganov)
2023-10-02  infill : add new example + extend server API (#3296)  (vvhg1)
2023-09-28  llama.cpp : split llama_context_params into model and context params (#3301)  (slaren)
2023-09-28  train : finetune LORA (#2632)  (xaedes)
2023-09-28  llama : custom attention mask + parallel decoding + no context swaps (#3228)  (Georgi Gerganov)
2023-09-20  llama : allow gguf RoPE keys to be overridden with defaults (#3240)  (Cebtenzzre)
2023-09-15  check C++ code with -Wmissing-declarations (#3184)  (Cebtenzzre)