path: root/examples/server/server.cpp
Age        | Commit message                                                                    | Author
2024-01-02 | editorconfig : fix whitespace and indentation #4710                               | Georgi Gerganov
2024-01-02 | server : add --override-kv parameter (#4710)                                      | minarchist
2023-12-30 | clip : refactor + bug fixes (#4696)                                               | Georgi Gerganov
2023-12-29 | server : replace sleep with condition variables (#4673)                           | Justine Tunney
2023-12-29 | server : fix OpenAI server sampling w.r.t. penalty. (#4675)                       | SakuraUmi
2023-12-29 | server : allow to generate multimodal embeddings (#4681)                          | Karthik Sethuraman
2023-12-28 | Fix OpenAI server sampling w.r.t. temp and seed (#4668)                           | Justine Tunney
2023-12-23 | server : allow to specify custom prompt for penalty calculation (#3727)           | Alexey Parfenov
2023-12-17 | server : disable llm logs if SERVER_VERBOSE is off (#3792)                        | olexiyb
2023-12-17 | server : fix grammar being ignored (#4494)                                        | AdithyanI
2023-12-17 | server : fix possible ambiguity in content type charset (#4501)                   | Alexey Parfenov
2023-12-17 | server : allow requests larger than 8K (#4500)                                    | mzcu
2023-12-15 | server : add optional API Key Authentication example (#4441)                      | ShadovvBeast
2023-12-13 | server : fix handling of characters that span multiple tokens when streaming ... | shibe2
2023-12-12 | server : fix local model name in server (#4420)                                   | Vladimir Zorin
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309)                              | Georgi Gerganov
2023-12-06 | server : recognize cache_prompt parameter in OAI API (#4347)                      | Georgi Gerganov
2023-12-03 | server : fix OpenAI API `stop` field to be optional (#4299)                       | Ed Lee
2023-12-01 | llama : support optional tensors (#4283)                                          | Georgi Gerganov
2023-12-01 | server : add --log-disable to disable logging to file (#4260)                     | Ziad Ben Hadj-Alouane
2023-12-01 | server : add single-client multi-prompt support (#4232)                           | Ziad Ben Hadj-Alouane
2023-11-25 | server : OAI API compatibility (#4198)                                            | Georgi Gerganov
2023-11-23 | Fix incorrect format strings and uninitialized variables. (#4133)                 | Haohui Mai
2023-11-19 | server : relay error messages (#4131)                                             | SoftwareRenderer
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)                | Kerfuffle
2023-11-10 | server : fix crash when prompt exceeds context size (#3996)                       | Alexey Parfenov
2023-11-08 | server : add min_p param (#3877)                                                  | Mihai
2023-11-02 | build : link against build info instead of compiling against it (#3879)           | cebtenzzre
2023-11-01 | llama : implement YaRN RoPE scaling (#2268)                                       | cebtenzzre
2023-11-01 | server : re-enable completion and embedded at the same time (#3876)               | Adrian Hesketh
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)               | Kerfuffle
2023-10-26 | server : do not release slot on image input (#3798)                               | Georgi Gerganov
2023-10-24 | server : add parameter -tb N, --threads-batch N (#3584) (#3768)                   | cebtenzzre
2023-10-24 | server : do not block system prompt update (#3767)                                | Georgi Gerganov
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720)    | Marcus Dunn
2023-10-22 | server : parallel decoding and multimodal (#3677)                                 | Georgi Gerganov
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696)                     | Georgi Gerganov
2023-10-20 | server : fix uninitialized sampling context (close #3685)                         | Georgi Gerganov
2023-10-18 | speculative : add tree-based sampling example (#3624)                             | Georgi Gerganov
2023-10-12 | server : fix kv cache management (#3588)                                          | Georgi Gerganov
2023-10-11 | server : add parameter -tb N, --threads-batch N (#3584)                           | Michael Coppola
2023-10-11 | common : fix mirostat state when using multiple sequences (#3543)                 | Kerfuffle
2023-10-10 | infill. : fix tokenization (#3508)                                                | vvhg1
2023-10-06 | server : reuse llama_sample_token common util (#3494)                             | Jhen-Jie Hong
2023-10-05 | build : use std::make_tuple() for compatibility with older GCC versions (#3488)   | Kenvix ⭐
2023-10-05 | server : fix incorrect num_tokens_predicted (#3480)                               | Jhen-Jie Hong
2023-10-03 | llama : fix session saving/loading (#3400)                                        | Georgi Gerganov
2023-10-02 | infill : add new example + extend server API (#3296)                              | vvhg1
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301)      | slaren
2023-09-28 | train : finetune LORA (#2632)                                                     | xaedes