path: root/examples/server/server.cpp
Age | Commit message | Author
2023-09-05 | examples : replace fprintf to stdout with printf (#3017) | Cebtenzzre
2023-09-02 | server : avoid aniprompt in probabilities of final response (#2849) | Jhen-Jie Hong
2023-09-01 | build : fix most gcc and clang warnings (#2861) | Cebtenzzre
2023-08-28 | YAML result logging + preset script (#2657) | Johannes Gäßler
2023-08-27 | llama : more tokenizer fixes (#2810) | Georgi Gerganov
2023-08-27 | server : add `/detokenize` endpoint (#2802) | Bruce MacDonald
2023-08-25 | llama : add llama_beam_search() (#2267) | Matt Pulver
2023-08-25 | server : display token probabilities in the UI (#2489) | Jhen-Jie Hong
2023-08-23 | server : allow json array in prompt or content for direct token input (#2306) | Xiao-Yong Jin
2023-08-22 | CUDA: use mul_mat_q kernels by default (#2683) | Johannes Gäßler
2023-08-22 | server : fallback to default if client param is null (#2688) | Jhen-Jie Hong
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398) | Georgi Gerganov
2023-08-15 | server : add missing /json-schema-to-grammar.mjs (#2616) | Jhen-Jie Hong
2023-08-14 | server : add --numa support (#2524) | Cheng Shao
2023-08-12 | server: fixed wrong variable name in timing json (#2579) | Equim
2023-08-10 | Fix grammar-based sampling issue in server (#2566) | Martin Krasser
2023-08-08 | Allow passing grammar to completion endpoint (#2532) | Martin Krasser
2023-08-04 | Fixing race condition in server and partial stream handling in frontend. (#2391) | Stephen Nichols
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | Johannes Gäßler
2023-07-25 | server: add rms_norm_eps parameter (#2380) | slaren
2023-07-23 | Add gqa parameter support to the server (#2351) | IgnacioFDM
2023-07-15 | llama : add custom RoPE (#2054) | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | Howard Su
2023-07-11 | Support using mmap when applying LoRA (#2095) | Howard Su
2023-07-10 | mpi : add support for distributed inference via MPI (#2099) | Evan Miller
2023-07-05 | Expose generation timings from server & update completions.js (#2116) | Tobias Lütke
2023-07-04 | Add an API example using server.cpp similar to OAI. (#2009) | jwj7140
2023-07-04 | Simple webchat for server (#1998) | Tobias Lütke
2023-07-04 | fix server crashes (#2076) | Henri Vasserman
2023-07-03 | server: add option to output probabilities for completion (#1962) | WangHaoranRobin
2023-06-26 | ggml : add NUMA support (#1556) | zrm
2023-06-25 | fix server sampling: top k sampler first (#1977) | anon998
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797) | Didzis Gosko
2023-06-20 | [Fix] Reenable server embedding endpoint (#1937) | Henri Vasserman
2023-06-17 | Server Example Refactor and Improvements (#1570) | Randall Fitzgerald
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle
2023-05-28 | examples : add --alias option to gpt_params to set use friendly model name (#... | Vladimir Zorin
2023-05-27 | Include server in releases + other build system cleanups (#1610) | Kerfuffle
2023-05-21 | examples : add server example with REST API (#1443) | Steward Garcia