author | Pierrick Hymbert <pierrick.hymbert@gmail.com> | 2024-02-25 21:46:29 +0100
committer | GitHub <noreply@github.com> | 2024-02-25 21:46:29 +0100
commit | 8b350356b28f782deab63d8b0e9ae103ceb25fcd (patch)
tree | fe8ae94da85a19ebdfc87ed88f53f8a224d56313 /examples/server
parent | bf08e00643fd529f748f0a858fd79f3061e3fa18 (diff)
server: docs - refresh and tease a little bit more the http server (#5718)
* server: docs - refresh and tease a little bit more the http server
* Rephrase README.md server doc
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update examples/server/README.md
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update examples/server/README.md
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update README.md
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Diffstat (limited to 'examples/server')
-rw-r--r-- | examples/server/README.md | 18
1 file changed, 15 insertions, 3 deletions
diff --git a/examples/server/README.md b/examples/server/README.md
index cb3fd605..0e9bd7fd 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -1,8 +1,20 @@
-# llama.cpp/example/server
+# LLaMA.cpp HTTP Server
 
-This example demonstrates a simple HTTP API server and a simple web front end to interact with llama.cpp.
+Fast, lightweight, pure C/C++ HTTP server based on [httplib](https://github.com/yhirose/cpp-httplib), [nlohmann::json](https://github.com/nlohmann/json) and **llama.cpp**.
 
-Command line options:
+Set of LLM REST APIs and a simple web front end to interact with llama.cpp.
+
+**Features:**
+ * LLM inference of F16 and quantum models on GPU and CPU
+ * [OpenAI API](https://github.com/openai/openai-openapi) compatible chat completions and embeddings routes
+ * Parallel decoding with multi-user support
+ * Continuous batching
+ * Multimodal (wip)
+ * Monitoring endpoints
+
+The project is under active development, and we are [looking for feedback and contributors](https://github.com/ggerganov/llama.cpp/issues/4216).
+
+**Command line options:**
 
 - `--threads N`, `-t N`: Set the number of threads to use during generation.
 - `-tb N, --threads-batch N`: Set the number of threads to use during batch and prompt processing. If not specified, the number of threads will be set to the number of threads used for generation.
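The refreshed README advertises OpenAI-API-compatible chat completions and embeddings routes. As a rough illustration of what that compatibility looks like from a client's side, here is a minimal sketch that posts a chat request using only the Python standard library; the address `127.0.0.1:8080` and the `/v1/chat/completions` path are assumptions about a locally running server with default settings, not something specified in this diff.

```python
# Minimal sketch (not part of the diff): query the server's OpenAI-compatible
# chat completions route with the Python standard library only.
# Assumption: the server is already running locally on its default port 8080
# and exposes /v1/chat/completions; adjust host, port, and payload as needed.
import json
import urllib.request

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# The generated text sits in the first choice's message, mirroring the OpenAI schema.
print(reply["choices"][0]["message"]["content"])
```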