ik_llama.cpp.git (branch: main)
path: root/examples/server/server.cpp
Age        | Commit message                                                                    | Author
-----------|-----------------------------------------------------------------------------------|----------------
2023-11-02 | build : link against build info instead of compiling against it (#3879)           | cebtenzzre
2023-11-01 | llama : implement YaRN RoPE scaling (#2268)                                       | cebtenzzre
2023-11-01 | server : re-enable completion and embedded at the same time (#3876)               | Adrian Hesketh
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)               | Kerfuffle
2023-10-26 | server : do not release slot on image input (#3798)                               | Georgi Gerganov
2023-10-24 | server : add parameter -tb N, --threads-batch N (#3584) (#3768)                   | cebtenzzre
2023-10-24 | server : do not block system prompt update (#3767)                                | Georgi Gerganov
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720)    | Marcus Dunn
2023-10-22 | server : parallel decoding and multimodal (#3677)                                 | Georgi Gerganov
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696)                     | Georgi Gerganov
2023-10-20 | server : fix uninitialized sampling context (close #3685)                         | Georgi Gerganov
2023-10-18 | speculative : add tree-based sampling example (#3624)                             | Georgi Gerganov
2023-10-12 | server : fix kv cache management (#3588)                                          | Georgi Gerganov
2023-10-11 | server : add parameter -tb N, --threads-batch N (#3584)                           | Michael Coppola
2023-10-11 | common : fix mirostat state when using multiple sequences (#3543)                 | Kerfuffle
2023-10-10 | infill. : fix tokenization (#3508)                                                | vvhg1
2023-10-06 | server : reuse llama_sample_token common util (#3494)                             | Jhen-Jie Hong
2023-10-05 | build : use std::make_tuple() for compatibility with older GCC versions (#3488)   | Kenvix ⭐
2023-10-05 | server : fix incorrect num_tokens_predicted (#3480)                               | Jhen-Jie Hong
2023-10-03 | llama : fix session saving/loading (#3400)                                        | Georgi Gerganov
2023-10-02 | infill : add new example + extend server API (#3296)                              | vvhg1
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301)      | slaren
2023-09-28 | train : finetune LORA (#2632)                                                     | xaedes
2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228)      | Georgi Gerganov
2023-09-20 | llama : allow gguf RoPE keys to be overridden with defaults (#3240)               | Cebtenzzre
2023-09-15 | check C++ code with -Wmissing-declarations (#3184)                                | Cebtenzzre
2023-09-07 | fix some warnings from gcc and clang-tidy (#3038)                                 | Cebtenzzre
2023-09-05 | examples : replace fprintf to stdout with printf (#3017)                          | Cebtenzzre
2023-09-02 | server : avoid aniprompt in probabilities of final response (#2849)               | Jhen-Jie Hong
2023-09-01 | build : fix most gcc and clang warnings (#2861)                                   | Cebtenzzre
2023-08-28 | YAML result logging + preset script (#2657)                                       | Johannes Gäßler
2023-08-27 | llama : more tokenizer fixes (#2810)                                              | Georgi Gerganov
2023-08-27 | server : add `/detokenize` endpoint (#2802)                                       | Bruce MacDonald
2023-08-25 | llama : add llama_beam_search() (#2267)                                           | Matt Pulver
2023-08-25 | server : display token probabilities in the UI (#2489)                            | Jhen-Jie Hong
2023-08-23 | server : allow json array in prompt or content for direct token input (#2306)     | Xiao-Yong Jin
2023-08-22 | CUDA: use mul_mat_q kernels by default (#2683)                                    | Johannes Gäßler
2023-08-22 | server : fallback to default if client param is null (#2688)                      | Jhen-Jie Hong
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398)                     | Georgi Gerganov
2023-08-15 | server : add missing /json-schema-to-grammar.mjs (#2616)                          | Jhen-Jie Hong
2023-08-14 | server : add --numa support (#2524)                                               | Cheng Shao
2023-08-12 | server: fixed wrong variable name in timing json (#2579)                          | Equim
2023-08-10 | Fix grammar-based sampling issue in server (#2566)                                | Martin Krasser
2023-08-08 | Allow passing grammar to completion endpoint (#2532)                              | Martin Krasser
2023-08-04 | Fixing race condition in server and partial stream handling in frontend. (#2391)  | Stephen Nichols
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453)                              | Johannes Gäßler
2023-07-25 | server: add rms_norm_eps parameter (#2380)                                        | slaren
2023-07-23 | Add gqa parameter support to the server (#2351)                                   | IgnacioFDM
2023-07-15 | llama : add custom RoPE (#2054)                                                   | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206)                    | Howard Su