From 39baaf55a160909bb9428bd981014218761a20cb Mon Sep 17 00:00:00 2001
From: Kyle Mistele
Date: Sun, 28 Jan 2024 01:55:31 -0600
Subject: docker : add server-first container images (#5157)

* feat: add Dockerfiles for each platform that use ./server instead of ./main
* feat: update .github/workflows/docker.yml to build server-first docker containers
* doc: add information about running the server with Docker to README.md
* doc: add information about running with docker to the server README
* doc: update n-gpu-layers to show correct GPU usage
* fix(doc): update container tag from `server` to `server-cuda` for README example on running server container with CUDA
---
 examples/server/README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

(limited to 'examples/server')

diff --git a/examples/server/README.md b/examples/server/README.md
index 1c92a204..dce4ec47 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -66,6 +66,14 @@ server.exe -m models\7B\ggml-model.gguf -c 2048
 
 The above command will start a server that by default listens on `127.0.0.1:8080`. You can consume the endpoints with Postman or NodeJS with axios library. You can visit the web front end at the same url.
 
+### Docker:
+```bash
+docker run -p 8080:8080 -v /path/to/models:/models ggerganov/llama.cpp:server -m models/7B/ggml-model.gguf -c 512 --host 0.0.0.0 --port 8080
+
+# or, with CUDA:
+docker run -p 8080:8080 -v /path/to/models:/models --gpus all ggerganov/llama.cpp:server-cuda -m models/7B/ggml-model.gguf -c 512 --host 0.0.0.0 --port 8080 --n-gpu-layers 99
+```
+
 ## Testing with CURL
 
 Using [curl](https://curl.se/). On Windows `curl.exe` should be available in the base OS.
-- 
cgit v1.2.3