ik_llama.cpp.git (branch: main)
Commit log for path: root/examples/server/server.cpp
Age | Commit message | Author
2023-09-07 | fix some warnings from gcc and clang-tidy (#3038) | Cebtenzzre
2023-09-05 | examples : replace fprintf to stdout with printf (#3017) | Cebtenzzre
2023-09-02 | server : avoid aniprompt in probabilities of final response (#2849) | Jhen-Jie Hong
2023-09-01 | build : fix most gcc and clang warnings (#2861) | Cebtenzzre
2023-08-28 | YAML result logging + preset script (#2657) | Johannes Gäßler
2023-08-27 | llama : more tokenizer fixes (#2810) | Georgi Gerganov
2023-08-27 | server : add `/detokenize` endpoint (#2802) | Bruce MacDonald
2023-08-25 | llama : add llama_beam_search() (#2267) | Matt Pulver
2023-08-25 | server : display token probabilities in the UI (#2489) | Jhen-Jie Hong
2023-08-23 | server : allow json array in prompt or content for direct token input (#2306) | Xiao-Yong Jin
2023-08-22 | CUDA: use mul_mat_q kernels by default (#2683) | Johannes Gäßler
2023-08-22 | server : fallback to default if client param is null (#2688) | Jhen-Jie Hong
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398) | Georgi Gerganov
2023-08-15 | server : add missing /json-schema-to-grammar.mjs (#2616) | Jhen-Jie Hong
2023-08-14 | server : add --numa support (#2524) | Cheng Shao
2023-08-12 | server: fixed wrong variable name in timing json (#2579) | Equim
2023-08-10 | Fix grammar-based sampling issue in server (#2566) | Martin Krasser
2023-08-08 | Allow passing grammar to completion endpoint (#2532) | Martin Krasser
2023-08-04 | Fixing race condition in server and partial stream handling in frontend. (#2391) | Stephen Nichols
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | Johannes Gäßler
2023-07-25 | server: add rms_norm_eps parameter (#2380) | slaren
2023-07-23 | Add gqa parameter support to the server (#2351) | IgnacioFDM
2023-07-15 | llama : add custom RoPE (#2054) | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | Howard Su
2023-07-11 | Support using mmap when applying LoRA (#2095) | Howard Su
2023-07-10 | mpi : add support for distributed inference via MPI (#2099) | Evan Miller
2023-07-05 | Expose generation timings from server & update completions.js (#2116) | Tobias Lütke
2023-07-04 | Add an API example using server.cpp similar to OAI. (#2009) | jwj7140
2023-07-04 | Simple webchat for server (#1998) | Tobias Lütke
2023-07-04 | fix server crashes (#2076) | Henri Vasserman
2023-07-03 | server: add option to output probabilities for completion (#1962) | WangHaoranRobin
2023-06-26 | ggml : add NUMA support (#1556) | zrm
2023-06-25 | fix server sampling: top k sampler first (#1977) | anon998
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797) | Didzis Gosko
2023-06-20 | [Fix] Reenable server embedding endpoint (#1937) | Henri Vasserman
2023-06-17 | Server Example Refactor and Improvements (#1570) | Randall Fitzgerald
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle
2023-05-28 | examples : add --alias option to gpt_params to set use friendly model name (#... | Vladimir Zorin
2023-05-27 | Include server in releases + other build system cleanups (#1610) | Kerfuffle
2023-05-21 | examples : add server example with REST API (#1443) | Steward Garcia