path: root/examples/server
Date       | Commit message                                                                   | Author
-----------|----------------------------------------------------------------------------------|-------
2023-12-03 | server : fix OpenAI API `stop` field to be optional (#4299)                      | Ed Lee
2023-12-03 | py : add grammar to oai like api (#4294)                                         | Rickard Edén
2023-12-01 | llama : support optional tensors (#4283)                                         | Georgi Gerganov
2023-12-01 | server : add --log-disable to disable logging to file (#4260)                    | Ziad Ben Hadj-Alouane
2023-12-01 | server : add single-client multi-prompt support (#4232)                          | Ziad Ben Hadj-Alouane
2023-11-30 | py : fix oai proxy (#3972)                                                       | rhjdvsgsgks
2023-11-25 | server : OAI API compatibility (#4198)                                           | Georgi Gerganov
2023-11-23 | Fix incorrect format strings and uninitialized variables. (#4133)                | Haohui Mai
2023-11-19 | server : relay error messages (#4131)                                            | SoftwareRenderer
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)               | Kerfuffle
2023-11-10 | server : fix crash when prompt exceeds context size (#3996)                      | Alexey Parfenov
2023-11-10 | server : allow continue edit on completion mode (#3950)                          | Jhen-Jie Hong
2023-11-08 | server : add min_p param (#3877)                                                 | Mihai
2023-11-07 | llava : expose as a shared library for downstream projects (#3613)               | Damian Stewart
2023-11-05 | server : fix typo for --alias shortcut from -m to -a (#3958)                     | Thái Hoàng Tâm
2023-11-02 | build : link against build info instead of compiling against it (#3879)          | cebtenzzre
2023-11-01 | llama : implement YaRN RoPE scaling (#2268)                                      | cebtenzzre
2023-11-01 | server : re-enable completion and embedded at the same time (#3876)              | Adrian Hesketh
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)              | Kerfuffle
2023-10-26 | server : do not release slot on image input (#3798)                              | Georgi Gerganov
2023-10-24 | server : add parameter -tb N, --threads-batch N (#3584) (#3768)                  | cebtenzzre
2023-10-24 | server : do not block system prompt update (#3767)                               | Georgi Gerganov
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720)   | Marcus Dunn
2023-10-22 | server : parallel decoding and multimodal (#3677)                                | Georgi Gerganov
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696)                    | Georgi Gerganov
2023-10-20 | server : fix uninitialized sampling context (close #3685)                        | Georgi Gerganov
2023-10-18 | speculative : add tree-based sampling example (#3624)                            | Georgi Gerganov
2023-10-17 | editorconfig : remove trailing spaces                                            | Georgi Gerganov
2023-10-17 | server : documentation of JSON return value of /completion endpoint (#3632)      | coezbek
2023-10-12 | server : add completion mode (no chat) (#3582)                                   | Aarni Koskela
2023-10-12 | server : fix kv cache management (#3588)                                         | Georgi Gerganov
2023-10-11 | server : add parameter -tb N, --threads-batch N (#3584)                          | Michael Coppola
2023-10-11 | common : fix mirostat state when using multiple sequences (#3543)                | Kerfuffle
2023-10-10 | infill. : fix tokenization (#3508)                                               | vvhg1
2023-10-08 | api_like_OAI.py : compat with Microsoft Guidance (#2746)                         | Ryder Wishart
2023-10-08 | api_like_OAI.py : simplify function (#2796)                                      | arcrank
2023-10-06 | server : docs fix default values and add n_probs (#3506)                         | Mihai
2023-10-06 | server : reuse llama_sample_token common util (#3494)                            | Jhen-Jie Hong
2023-10-05 | build : use std::make_tuple() for compatibility with older GCC versions (#3488)  | Kenvix ⭐
2023-10-05 | server : fix incorrect num_tokens_predicted (#3480)                              | Jhen-Jie Hong
2023-10-03 | llama : fix session saving/loading (#3400)                                       | Georgi Gerganov
2023-10-02 | infill : add new example + extend server API (#3296)                             | vvhg1
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301)     | slaren
2023-09-28 | train : finetune LORA (#2632)                                                    | xaedes
2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228)     | Georgi Gerganov
2023-09-20 | llama : allow gguf RoPE keys to be overridden with defaults (#3240)              | Cebtenzzre
2023-09-15 | check C++ code with -Wmissing-declarations (#3184)                               | Cebtenzzre
2023-09-07 | fix some warnings from gcc and clang-tidy (#3038)                                | Cebtenzzre
2023-09-05 | examples : replace fprintf to stdout with printf (#3017)                         | Cebtenzzre
2023-09-04 | server : add a subtle loading animation to the edit box (#2466)                  | Aarni Koskela