author | Radoslav Gerganov <rgerganov@gmail.com> | 2024-05-14 14:27:19 +0300 |
---|---|---|
committer | GitHub <noreply@github.com> | 2024-05-14 14:27:19 +0300 |
commit | 5e31828d3e35c76ecfee665bc23771a4bec1d130 (patch) | |
tree | 7f5f2edc7c3fc3e7655904316897e32202edd5d6 /examples/rpc/README.md | |
parent | 541600201e6480f54ae09e58d16b154d4b4b331d (diff) |
ggml : add RPC backend (#6829)
* ggml : add RPC backend
The RPC backend proxies all operations to a remote server which runs a
regular backend (CPU, CUDA, Metal, etc.).
* set TCP_NODELAY
* add CI workflows
* Address review comments
* fix warning
* implement llama_max_devices() for RPC
* Address review comments
* Address review comments
* wrap sockfd into a struct
* implement get_alignment and get_max_size
* add get_device_memory
* fix warning
* win32 support
* add README
* readme : trim trailing whitespace
* Address review comments
* win32 fix
* Address review comments
* fix compile warnings on macos
Diffstat (limited to 'examples/rpc/README.md')
-rw-r--r-- | examples/rpc/README.md | 74 |
1 file changed, 74 insertions, 0 deletions
diff --git a/examples/rpc/README.md b/examples/rpc/README.md
new file mode 100644
index 00000000..325d0abc
--- /dev/null
+++ b/examples/rpc/README.md
@@ -0,0 +1,74 @@
+## Overview
+
+The `rpc-server` allows running a `ggml` backend on a remote host.
+The RPC backend communicates with one or several instances of `rpc-server` and offloads computations to them.
+This can be used for distributed LLM inference with `llama.cpp` in the following way:
+
+```mermaid
+flowchart TD
+    rpcb---|TCP|srva
+    rpcb---|TCP|srvb
+    rpcb-.-|TCP|srvn
+    subgraph hostn[Host N]
+    srvn[rpc-server]-.-backend3["Backend (CUDA,Metal,etc.)"]
+    end
+    subgraph hostb[Host B]
+    srvb[rpc-server]---backend2["Backend (CUDA,Metal,etc.)"]
+    end
+    subgraph hosta[Host A]
+    srva[rpc-server]---backend["Backend (CUDA,Metal,etc.)"]
+    end
+    subgraph host[Main Host]
+    ggml[llama.cpp]---rpcb[RPC backend]
+    end
+    style hostn stroke:#66,stroke-width:2px,stroke-dasharray: 5 5
+```
+
+Each host can run a different backend, e.g. one with CUDA and another with Metal.
+You can also run multiple `rpc-server` instances on the same host, each with a different backend.
+
+## Usage
+
+On each host, build the corresponding backend with `cmake` and add `-DLLAMA_RPC=ON` to the build options.
+For example, to build the CUDA backend with RPC support:
+
+```bash
+mkdir build-rpc-cuda
+cd build-rpc-cuda
+cmake .. -DLLAMA_CUDA=ON -DLLAMA_RPC=ON
+cmake --build . --config Release
+```
+
+Then, start the `rpc-server` with the backend:
+
+```bash
+$ bin/rpc-server 0.0.0.0 50052
+create_backend: using CUDA backend
+ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
+ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
+ggml_cuda_init: found 1 CUDA devices:
+  Device 0: NVIDIA T1200 Laptop GPU, compute capability 7.5, VMM: yes
+Starting RPC server on 0.0.0.0:50052
+```
+
+When using the CUDA backend, you can specify the device with the `CUDA_VISIBLE_DEVICES` environment variable, e.g.:
+```bash
+$ CUDA_VISIBLE_DEVICES=0 bin/rpc-server 0.0.0.0 50052
+```
+This way you can run multiple `rpc-server` instances on the same host, each with a different CUDA device.
+
+
+On the main host, build `llama.cpp` with only `-DLLAMA_RPC=ON`:
+
+```bash
+mkdir build-rpc
+cd build-rpc
+cmake .. -DLLAMA_RPC=ON
+cmake --build . --config Release
+```
+
+Finally, use the `--rpc` option to specify the host and port of each `rpc-server`:
+
+```bash
+$ bin/main -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name is" --repeat-penalty 1.0 -n 64 --rpc 192.168.88.10:50052,192.168.88.11:50052 -ngl 99
+```
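
The README above notes that each host can run a different backend, e.g. one with CUDA and another with Metal. A minimal sketch of the Metal side of that setup, assuming the `LLAMA_METAL` CMake option used by `llama.cpp` builds of this period (the option name and the `build-rpc-metal` directory are assumptions, not taken from the README):

```bash
# Sketch: build a Metal backend with RPC support on a macOS host
# (LLAMA_METAL is assumed to be the Metal build option in this llama.cpp version)
mkdir build-rpc-metal
cd build-rpc-metal
cmake .. -DLLAMA_METAL=ON -DLLAMA_RPC=ON
cmake --build . --config Release

# start the server the same way as in the CUDA example above
bin/rpc-server 0.0.0.0 50052
```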
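
The README also says multiple `rpc-server` instances can run on the same host, each pinned to a different CUDA device. A hedged sketch of that layout with two GPUs on one machine; the second port and the 192.168.88.10 address are illustrative, while `CUDA_VISIBLE_DEVICES`, `rpc-server`, and the comma-separated `--rpc` list come from the README itself:

```bash
# Sketch: one rpc-server per GPU on the same host, each bound to its own port
CUDA_VISIBLE_DEVICES=0 bin/rpc-server 0.0.0.0 50052 &
CUDA_VISIBLE_DEVICES=1 bin/rpc-server 0.0.0.0 50053 &

# On the main host, pass both endpoints as a comma-separated list to --rpc
bin/main -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name is" -n 64 \
    --rpc 192.168.88.10:50052,192.168.88.10:50053 -ngl 99
```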