path: root/examples/server
Age        | Commit message                                                                   | Author
2024-08-12 | Merge mainline - Aug 12 2024 (#17)                                               | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3)                                                    | Kawrakow
2024-06-20 | server : fix smart slot selection (#8020)                                        | sasha0552
2024-06-18 | Only use FIM middle token if it exists (#7648)                                   | Sigbjørn Skjæret
2024-06-13 | `build`: rename main → llama-cli, server → llama-server, llava-cli → ll...       | Olivier Chafik
2024-06-12 | server : restore numeric prompts (#7883)                                         | Georgi Gerganov
2024-06-11 | json: refine constraint for whitespace to avoid runaways yet allow pretty pri... | Olivier Chafik
2024-06-11 | `json`: document schema conversion in GBNF readme, align manual grammar examp... | Olivier Chafik
2024-06-10 | server : improve "prompt" handling (#7847)                                       | Georgi Gerganov
2024-06-09 | server: do not remove whitespace at the start of a completion chunk (#7830)      | mgroeber9110
2024-06-08 | server : smart slot selection using Longest Common Prefix (#7728)                | sasha0552
2024-06-07 | server: update cache_prompt documentation [no ci] (#7745)                        | Johannes Gäßler
2024-06-07 | server : do not get prompt in infill mode (#7286)                                | woodx
2024-06-06 | imatrix : migrate to gpt_params (#7771)                                          | Georgi Gerganov
2024-06-06 | grammars: x{min,max} repetition operator (#6640)                                 | Olivier Chafik
2024-06-04 | common : refactor cli arg parsing (#7675)                                        | Georgi Gerganov
2024-06-01 | server : new UI (#7633)                                                          | Yazan Agha-Schrader
2024-06-02 | SimpleChat: Simple histogram/repeatMatching driven garbageTrimming, Settings ... | HanishKVC
2024-05-31 | server : update js (#7670)                                                       | Georgi Gerganov
2024-05-28 | server: do not remove whitespace at the start of a completion chunk (#7524)      | mgroeber9110
2024-05-28 | Markdownish code block fix (#7571)                                               | Nathan Epstein
2024-05-26 | SimpleChat Completion Mode flexibility and cleanup, Settings gMe, Optional sl... | HanishKVC
2024-05-23 | SimpleChat: a simple and dumb web front end for testing /chat/completions and... | HanishKVC
2024-05-22 | common : normalize naming style (#7462)                                          | Georgi Gerganov
2024-05-21 | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)                     | jaime-m-p
2024-05-20 | Tokenizer SPM fixes for phi-3 and llama-spm (#7375)                              | jaime-m-p
2024-05-20 | server : fix temperature + disable some tests (#7409)                            | Georgi Gerganov
2024-05-20 | server : tuning tests (#7388)                                                    | Georgi Gerganov
2024-05-20 | server : return error on too large embedding input (#7389)                      | Georgi Gerganov
2024-05-19 | server: add test for token probs (#7347)                                         | Johannes Gäßler
2024-05-19 | server: fix seed being reported back (#7382)                                     | Johannes Gäßler
2024-05-18 | server: correct --threads documentation [no ci] (#7362)                          | Johannes Gäßler
2024-05-17 | server : add support for the RPC backend (#7305)                                 | Radoslav Gerganov
2024-05-17 | [Server] Added --verbose option to README [no ci] (#7335)                        | Leon Knauer
2024-05-16 | Revert "server bench: fix bench not waiting for model load (#7284)" (#7334)      | Pierrick Hymbert
2024-05-15 | server bench: fix bench not waiting for model load (#7284)                       | Johannes Gäßler
2024-05-14 | server: free sampling contexts on exit (#7264)                                   | Steve Grubb
2024-05-14 | docs: Fix typo and update description for --embeddings flag (#7026)              | Ryuei
2024-05-13 | change default temperature of OAI compat API from 0 to 1 (#7226)                 | Benjamin Findley
2024-05-11 | fix system prompt handling (#7153)                                               | Xuan Son Nguyen
2024-05-11 | server : free llama_batch on exit (#7212)                                        | Steve Grubb
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203)                        | Johannes Gäßler
2024-05-08 | convert-hf : save memory with lazy evaluation (#7075)                            | compilade
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143)                         | Johannes Gäßler
2024-05-08 | server : add themes + favicon (#6848)                                            | JohnnyB
2024-05-08 | server : add_special option for tokenize endpoint (#7059)                        | Johan
2024-05-08 | clean up json_value & server_log (#7142)                                         | Xuan Son Nguyen
2024-05-07 | server: fix incorrectly reported token probabilities (#7125)                     | Johannes Gäßler
2024-05-07 | server : update readme with undocumented options (#7013)                         | Kyle Mistele
2024-05-04 | If first token generated from the server is the stop word the server will cra... | maor-ps