path: root/examples/server
author    Kyle Mistele <kyle@mistele.com>  2024-01-28 01:55:31 -0600
committer GitHub <noreply@github.com>  2024-01-28 09:55:31 +0200
commit    39baaf55a160909bb9428bd981014218761a20cb (patch)
tree      e81a065ecca69182ca0e3f354de08782e23be2d7 /examples/server
parent    6db2b41a76ee78d5efdd5c3cddd5d7ad3f646855 (diff)
docker : add server-first container images (#5157)
* feat: add Dockerfiles for each platform that use ./server instead of ./main
* feat: update .github/workflows/docker.yml to build server-first docker containers
* doc: add information about running the server with Docker to README.md
* doc: add information about running with Docker to the server README
* doc: update n-gpu-layers to show correct GPU usage
* fix(doc): update container tag from `server` to `server-cuda` for README example on running server container with CUDA
Diffstat (limited to 'examples/server')
-rw-r--r--  examples/server/README.md | 8 ++++++++
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/examples/server/README.md b/examples/server/README.md
index 1c92a204..dce4ec47 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -66,6 +66,14 @@ server.exe -m models\7B\ggml-model.gguf -c 2048
The above command will start a server that by default listens on `127.0.0.1:8080`.
You can consume the endpoints with Postman, or with Node.js using the axios library. You can visit the web front end at the same URL.
+### Docker
+```bash
+docker run -p 8080:8080 -v /path/to/models:/models ggerganov/llama.cpp:server -m models/7B/ggml-model.gguf -c 512 --host 0.0.0.0 --port 8080
+
+# or, with CUDA:
+docker run -p 8080:8080 -v /path/to/models:/models --gpus all ggerganov/llama.cpp:server-cuda -m models/7B/ggml-model.gguf -c 512 --host 0.0.0.0 --port 8080 --n-gpu-layers 99
+```
+
## Testing with CURL
Using [curl](https://curl.se/). On Windows `curl.exe` should be available in the base OS.
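The `docker run` invocations added in this diff can also be expressed as a compose file. A minimal sketch, assuming the same host model directory (`/path/to/models`) and the `ggerganov/llama.cpp:server` image shown above; the service name `llama-server` is hypothetical:

```yaml
# docker-compose.yml — hypothetical service name; image tag, paths, and
# server arguments mirror the CPU `docker run` example from the README
services:
  llama-server:
    image: ggerganov/llama.cpp:server
    ports:
      - "8080:8080"          # expose the server on the host
    volumes:
      - /path/to/models:/models
    # arguments are passed to ./server as the container command
    command: -m models/7B/ggml-model.gguf -c 512 --host 0.0.0.0 --port 8080
```

With this file in place, `docker compose up` starts the server listening on `127.0.0.1:8080`, the same default as the direct `docker run` command.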