ik_llama.cpp.git (branch: main)
Commit log for path: root/examples/server
Age        | Commit message                                                                   | Author
2023-09-07 | fix some warnings from gcc and clang-tidy (#3038)                                | Cebtenzzre
2023-09-05 | examples : replace fprintf to stdout with printf (#3017)                         | Cebtenzzre
2023-09-04 | server : add a subtle loading animation to the edit box (#2466)                  | Aarni Koskela
2023-09-02 | server : avoid aniprompt in probabilities of final response (#2849)              | Jhen-Jie Hong
2023-09-01 | build : fix most gcc and clang warnings (#2861)                                  | Cebtenzzre
2023-08-28 | YAML result logging + preset script (#2657)                                      | Johannes Gäßler
2023-08-27 | llama : more tokenizer fixes (#2810)                                             | Georgi Gerganov
2023-08-27 | server : add `/detokenize` endpoint (#2802)                                      | Bruce MacDonald
2023-08-26 | examples : skip unnecessary external lib in server README.md how-to (#2804)      | lon
2023-08-25 | llama : add llama_beam_search() (#2267)                                          | Matt Pulver
2023-08-25 | server : display token probabilities in the UI (#2489)                           | Jhen-Jie Hong
2023-08-23 | chmod : make scripts executable (#2675)                                          | Cebtenzzre
2023-08-23 | server : allow json array in prompt or content for direct token input (#2306)    | Xiao-Yong Jin
2023-08-22 | CUDA: use mul_mat_q kernels by default (#2683)                                   | Johannes Gäßler
2023-08-22 | server : fallback to default if client param is null (#2688)                     | Jhen-Jie Hong
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398)                    | Georgi Gerganov
2023-08-19 | server : better default prompt (#2646)                                           | Georgi Gerganov
2023-08-19 | server : update xxd usage for older versions compatibility (#2649)               | Jhen-Jie Hong
2023-08-18 | server : support for saving templates in browser LocalStorage (#2486)            | staviq
2023-08-15 | server : add missing /json-schema-to-grammar.mjs (#2616)                         | Jhen-Jie Hong
2023-08-14 | server : add --numa support (#2524)                                              | Cheng Shao
2023-08-14 | server : fix default grammar by use empty string in the UI (#2604)               | Jhen-Jie Hong
2023-08-14 | server : implement json-schema-to-grammar.mjs & add grammar param in the UI (... | Jhen-Jie Hong
2023-08-12 | server: fixed wrong variable name in timing json (#2579)                         | Equim
2023-08-10 | Fix grammar-based sampling issue in server (#2566)                               | Martin Krasser
2023-08-08 | Allow passing grammar to completion endpoint (#2532)                             | Martin Krasser
2023-08-04 | fix firefox autoscroll (#2519)                                                   | Jonas Wunderlich
2023-08-04 | server: regenerate completion.js.hpp (#2515)                                     | Cebtenzzre
2023-08-04 | Fixing race condition in server and partial stream handling in frontend. (#2391) | Stephen Nichols
2023-08-01 | fix a typo in examples/server/README.md (#2478)                                  | Bono Lv
2023-08-01 | server : Support dark mode (#2414)                                               | ebraminio
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453)                             | Johannes Gäßler
2023-07-28 | examples : server chat mode with llama2 (#2400)                                  | nhamanasu
2023-07-25 | server: add rms_norm_eps parameter (#2380)                                       | slaren
2023-07-25 | [Server] Escape HTML in webchat (#2368)                                          | Henri Vasserman
2023-07-24 | Chat UI extras (#2366)                                                           | Aarni Koskela
2023-07-23 | Add gqa parameter support to the server (#2351)                                  | IgnacioFDM
2023-07-21 | make : fix embdinput library and server examples building on MSYS2 (#2235)       | Przemysław Pawełczyk
2023-07-19 | cmake : install targets (#2256)                                                  | wzy
2023-07-15 | llama : add custom RoPE (#2054)                                                  | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206)                   | Howard Su
2023-07-11 | Support using mmap when applying LoRA (#2095)                                    | Howard Su
2023-07-10 | mpi : add support for distributed inference via MPI (#2099)                      | Evan Miller
2023-07-06 | convert : update for baichuan (#2081)                                            | Judd
2023-07-05 | Expose generation timings from server & update completions.js (#2116)            | Tobias Lütke
2023-07-05 | Update Server Instructions (#2113)                                               | Jesse Jojo Johnson
2023-07-05 | Update server instructions for web front end (#2103)                             | Jesse Jojo Johnson
2023-07-04 | Add an API example using server.cpp similar to OAI. (#2009)                      | jwj7140
2023-07-04 | Simple webchat for server (#1998)                                                | Tobias Lütke
2023-07-04 | fix server crashes (#2076)                                                       | Henri Vasserman
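The `/detokenize` endpoint added in #2802 converts an array of token IDs back into text. As a minimal sketch of building such a request body (the `tokens` and `content` field names are assumptions inferred from the commit title, not verified against the server source; consult examples/server/README.md for the authoritative schema):

```python
import json

def detokenize_body(tokens):
    """Build the assumed JSON body for POST /detokenize.

    Assumed schema: {"tokens": [<int>, ...]} in,
    {"content": "<decoded text>"} out.
    """
    return json.dumps({"tokens": list(tokens)})

# Example request body for three hypothetical token IDs:
print(detokenize_body([1, 15043, 3186]))  # {"tokens": [1, 15043, 3186]}
```

The body would then be POSTed to the running server (e.g. `http://localhost:8080/detokenize`) with a `Content-Type: application/json` header.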
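Two of the `/completion` changes in the log above compose naturally: #2306 allows the prompt to be passed as a raw token array instead of a string, and #2532 adds a grammar parameter for constrained sampling. A hedged sketch of such a combined request body (the `prompt`, `grammar`, and `n_predict` field names are assumptions drawn from the commit titles; verify against the server README before relying on them):

```python
import json

def completion_body(prompt_tokens, grammar=None, n_predict=16):
    """Build an assumed JSON body for POST /completion.

    prompt_tokens: raw token IDs (#2306 permits an array here).
    grammar: optional GBNF grammar string constraining output (#2532).
    n_predict: maximum number of tokens to generate.
    """
    body = {"prompt": list(prompt_tokens), "n_predict": n_predict}
    if grammar is not None:
        body["grammar"] = grammar
    return json.dumps(body)

# Hypothetical request: two token IDs, output constrained to "yes" or "no".
req = completion_body([1, 15043], grammar='root ::= "yes" | "no"')
print(req)
```

Passing tokens directly skips server-side tokenization, which is useful when a client has already tokenized text (for instance via a `/tokenize` call) and wants byte-exact control over the prompt.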