Commit log: examples/server/server.cpp
Age        | Commit message | Author
2023-11-02 | build : link against build info instead of compiling against it (#3879) | cebtenzzre
2023-11-01 | llama : implement YaRN RoPE scaling (#2268) | cebtenzzre
2023-11-01 | server : re-enable completion and embedded at the same time (#3876) | Adrian Hesketh
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | Kerfuffle
2023-10-26 | server : do not release slot on image input (#3798) | Georgi Gerganov
2023-10-24 | server : add parameter -tb N, --threads-batch N (#3584) (#3768) | cebtenzzre
2023-10-24 | server : do not block system prompt update (#3767) | Georgi Gerganov
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720) | Marcus Dunn
2023-10-22 | server : parallel decoding and multimodal (#3677) | Georgi Gerganov
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696) | Georgi Gerganov
2023-10-20 | server : fix uninitialized sampling context (close #3685) | Georgi Gerganov
2023-10-18 | speculative : add tree-based sampling example (#3624) | Georgi Gerganov
2023-10-12 | server : fix kv cache management (#3588) | Georgi Gerganov
2023-10-11 | server : add parameter -tb N, --threads-batch N (#3584) | Michael Coppola
2023-10-11 | common : fix mirostat state when using multiple sequences (#3543) | Kerfuffle
2023-10-10 | infill. : fix tokenization (#3508) | vvhg1
2023-10-06 | server : reuse llama_sample_token common util (#3494) | Jhen-Jie Hong
2023-10-05 | build : use std::make_tuple() for compatibility with older GCC versions (#3488) | Kenvix ⭐
2023-10-05 | server : fix incorrect num_tokens_predicted (#3480) | Jhen-Jie Hong
2023-10-03 | llama : fix session saving/loading (#3400) | Georgi Gerganov
2023-10-02 | infill : add new example + extend server API (#3296) | vvhg1
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301) | slaren
2023-09-28 | train : finetune LORA (#2632) | xaedes
2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | Georgi Gerganov
2023-09-20 | llama : allow gguf RoPE keys to be overridden with defaults (#3240) | Cebtenzzre
2023-09-15 | check C++ code with -Wmissing-declarations (#3184) | Cebtenzzre
2023-09-07 | fix some warnings from gcc and clang-tidy (#3038) | Cebtenzzre
2023-09-05 | examples : replace fprintf to stdout with printf (#3017) | Cebtenzzre
2023-09-02 | server : avoid aniprompt in probabilities of final response (#2849) | Jhen-Jie Hong
2023-09-01 | build : fix most gcc and clang warnings (#2861) | Cebtenzzre
2023-08-28 | YAML result logging + preset script (#2657) | Johannes Gäßler
2023-08-27 | llama : more tokenizer fixes (#2810) | Georgi Gerganov
2023-08-27 | server : add `/detokenize` endpoint (#2802) | Bruce MacDonald
2023-08-25 | llama : add llama_beam_search() (#2267) | Matt Pulver
2023-08-25 | server : display token probabilities in the UI (#2489) | Jhen-Jie Hong
2023-08-23 | server : allow json array in prompt or content for direct token input (#2306) | Xiao-Yong Jin
2023-08-22 | CUDA: use mul_mat_q kernels by default (#2683) | Johannes Gäßler
2023-08-22 | server : fallback to default if client param is null (#2688) | Jhen-Jie Hong
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398) | Georgi Gerganov
2023-08-15 | server : add missing /json-schema-to-grammar.mjs (#2616) | Jhen-Jie Hong
2023-08-14 | server : add --numa support (#2524) | Cheng Shao
2023-08-12 | server : fixed wrong variable name in timing json (#2579) | Equim
2023-08-10 | Fix grammar-based sampling issue in server (#2566) | Martin Krasser
2023-08-08 | Allow passing grammar to completion endpoint (#2532) | Martin Krasser
2023-08-04 | Fixing race condition in server and partial stream handling in frontend. (#2391) | Stephen Nichols
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | Johannes Gäßler
2023-07-25 | server : add rms_norm_eps parameter (#2380) | slaren
2023-07-23 | Add gqa parameter support to the server (#2351) | IgnacioFDM
2023-07-15 | llama : add custom RoPE (#2054) | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | Howard Su