path: root/examples/server
Age        | Commit message                                                                    | Author
2024-05-28 | server: do not remove whitespace at the start of a completion chunk (#7524)       | mgroeber9110
2024-05-28 | Markdownish code block fix (#7571)                                                | Nathan Epstein
2024-05-26 | SimpleChat Completion Mode flexibility and cleanup, Settings gMe, Optional sl...  | HanishKVC
2024-05-23 | SimpleChat: a simple and dumb web front end for testing /chat/completions and...  | HanishKVC
2024-05-22 | common : normalize naming style (#7462)                                           | Georgi Gerganov
2024-05-21 | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)                      | jaime-m-p
2024-05-20 | Tokenizer SPM fixes for phi-3 and llama-spm (#7375)                               | jaime-m-p
2024-05-20 | server : fix temperature + disable some tests (#7409)                             | Georgi Gerganov
2024-05-20 | server : tuning tests (#7388)                                                     | Georgi Gerganov
2024-05-20 | server : return error on too large embedding input (#7389)                        | Georgi Gerganov
2024-05-19 | server: add test for token probs (#7347)                                          | Johannes Gäßler
2024-05-19 | server: fix seed being reported back (#7382)                                      | Johannes Gäßler
2024-05-18 | server: correct --threads documentation [no ci] (#7362)                           | Johannes Gäßler
2024-05-17 | server : add support for the RPC backend (#7305)                                  | Radoslav Gerganov
2024-05-17 | [Server] Added --verbose option to README [no ci] (#7335)                         | Leon Knauer
2024-05-16 | Revert "server bench: fix bench not waiting for model load (#7284)" (#7334)       | Pierrick Hymbert
2024-05-15 | server bench: fix bench not waiting for model load (#7284)                        | Johannes Gäßler
2024-05-14 | server: free sampling contexts on exit (#7264)                                    | Steve Grubb
2024-05-14 | docs: Fix typo and update description for --embeddings flag (#7026)               | Ryuei
2024-05-13 | change default temperature of OAI compat API from 0 to 1 (#7226)                  | Benjamin Findley
2024-05-11 | fix system prompt handling (#7153)                                                | Xuan Son Nguyen
2024-05-11 | server : free llama_batch on exit (#7212)                                         | Steve Grubb
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203)                         | Johannes Gäßler
2024-05-08 | convert-hf : save memory with lazy evaluation (#7075)                             | compilade
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143)                          | Johannes Gäßler
2024-05-08 | server : add themes + favicon (#6848)                                             | JohnnyB
2024-05-08 | server : add_special option for tokenize endpoint (#7059)                         | Johan
2024-05-08 | clean up json_value & server_log (#7142)                                          | Xuan Son Nguyen
2024-05-07 | server: fix incorrectly reported token probabilities (#7125)                      | Johannes Gäßler
2024-05-07 | server : update readme with undocumented options (#7013)                          | Kyle Mistele
2024-05-04 | If first token generated from the server is the stop word the server will cra... | maor-ps
2024-05-01 | Server: add tests for batch size, different seeds (#6950)                         | Johannes Gäßler
2024-04-30 | ggml : add Flash Attention (#5021)                                                | Georgi Gerganov
2024-04-30 | Improve usability of --model-url & related flags (#6930)                          | Olivier Chafik
2024-04-29 | build(cmake): simplify instructions (`cmake -B build && cmake --build build ....  | Olivier Chafik
2024-04-27 | ci: server: tests python env on github container ubuntu latest / fix n_predic... | Pierrick Hymbert
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658)                        | Pierrick Hymbert
2024-04-26 | server: stop generation at `n_ctx_train` if `n_predict` is not set (#6638)        | Pierrick Hymbert
2024-04-26 | bench: server add stop word for PHI-2 (#6916)                                     | Pierrick Hymbert
2024-04-25 | tests : minor bash stuff (#6902)                                                  | Georgi Gerganov
2024-04-24 | server : do not apply Markdown formatting in code sections (#6850)                | mgroeber9110
2024-04-24 | common : revert showing control tokens by default for server (#6860)              | Kyle Mistele
2024-04-24 | Server: fix seed for multiple slots (#6835)                                       | Johannes Gäßler
2024-04-21 | `build`: generate hex dump of server assets during build (#6661)                  | Olivier Chafik
2024-04-21 | llama : support Llama 3 HF conversion (#6745)                                     | Pedro Cuenca
2024-04-20 | doc : server tests require llama to be built with curl enabled (#6788)            | Jan Boon
2024-04-19 | server: static: upstream upgrade (#6765)                                          | Pierrick Hymbert
2024-04-15 | `main`: add --json-schema / -j flag (#6659)                                       | Olivier Chafik
2024-04-15 | server : revert "minor layout improvements" (#6684)                               | Pierrick Hymbert
2024-04-12 | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings,...      | Olivier Chafik