author    Olivier Chafik <ochafik@users.noreply.github.com>    2024-06-13 00:41:52 +0100
committer GitHub <noreply@github.com>    2024-06-13 00:41:52 +0100
commit    1c641e6aac5c18b964e7b32d9dbbb4bf5301d0d7 (patch)
tree      616348dac8e67d80a03a81847ce9ee4bb7e19d49 /examples/server
parent    963552903f51043ee947a8deeaaa7ec00bc3f1a4 (diff)
`build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server gitignore llama-server
* server: simplify nix package
* main: update refs -> llama fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names. Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc" This reverts commit e474ef1df481fd8936cd7d098e3065d7de378930.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
Diffstat (limited to 'examples/server')
-rw-r--r--  examples/server/CMakeLists.txt                  2
-rw-r--r--  examples/server/README.md                      22
-rw-r--r--  examples/server/bench/README.md                 2
-rw-r--r--  examples/server/bench/bench.py                  2
-rw-r--r--  examples/server/public_simplechat/readme.md     4
-rw-r--r--  examples/server/tests/README.md                 8
-rw-r--r--  examples/server/tests/features/steps/steps.py   4
7 files changed, 21 insertions, 23 deletions
diff --git a/examples/server/CMakeLists.txt b/examples/server/CMakeLists.txt
index dab70961..8365f951 100644
--- a/examples/server/CMakeLists.txt
+++ b/examples/server/CMakeLists.txt
@@ -1,4 +1,4 @@
-set(TARGET server)
+set(TARGET llama-server)
option(LLAMA_SERVER_VERBOSE "Build verbose logging option for Server" ON)
option(LLAMA_SERVER_SSL "Build SSL support for the server" OFF)
include_directories(${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR})
diff --git a/examples/server/README.md b/examples/server/README.md
index ccbdcdbd..e7fb0bf6 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -80,26 +80,26 @@ The project is under active development, and we are [looking for feedback and co
## Build
-`server` is built alongside everything else from the root of the project
+`llama-server` is built alongside everything else from the root of the project
- Using `make`:
```bash
- make server
+ make llama-server
```
- Using `CMake`:
```bash
cmake -B build
- cmake --build build --config Release -t server
+ cmake --build build --config Release -t llama-server
```
- Binary is at `./build/bin/server`
+ Binary is at `./build/bin/llama-server`
## Build with SSL
-`server` can also be built with SSL support using OpenSSL 3
+`llama-server` can also be built with SSL support using OpenSSL 3
- Using `make`:
@@ -107,14 +107,14 @@ The project is under active development, and we are [looking for feedback and co
# NOTE: For non-system openssl, use the following:
# CXXFLAGS="-I /path/to/openssl/include"
# LDFLAGS="-L /path/to/openssl/lib"
- make LLAMA_SERVER_SSL=true server
+ make LLAMA_SERVER_SSL=true llama-server
```
- Using `CMake`:
```bash
cmake -B build -DLLAMA_SERVER_SSL=ON
- cmake --build build --config Release -t server
+ cmake --build build --config Release -t llama-server
```
## Quick Start
@@ -124,13 +124,13 @@ To get started right away, run the following command, making sure to use the cor
### Unix-based systems (Linux, macOS, etc.)
```bash
-./server -m models/7B/ggml-model.gguf -c 2048
+./llama-server -m models/7B/ggml-model.gguf -c 2048
```
### Windows
```powershell
-server.exe -m models\7B\ggml-model.gguf -c 2048
+llama-server.exe -m models\7B\ggml-model.gguf -c 2048
```
The above command will start a server that by default listens on `127.0.0.1:8080`.
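[Editor's note] A quick way to confirm the renamed binary is serving requests is to call its OAI-like endpoint. The sketch below is not part of this change; it assumes `llama-server` is listening on the default `127.0.0.1:8080` shown above and that the OAI-compatible `/v1/chat/completions` route is available, and the payload fields are illustrative only.

```python
# Minimal sketch (assumption: llama-server running on 127.0.0.1:8080 with an
# OAI-like /v1/chat/completions route, as described in this README).
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# Assumes an OAI-shaped response body.
print(body["choices"][0]["message"]["content"])
```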
@@ -629,11 +629,11 @@ bash chat.sh
### OAI-like API
-The HTTP `server` supports an OAI-like API: https://github.com/openai/openai-openapi
+The HTTP `llama-server` supports an OAI-like API: https://github.com/openai/openai-openapi
### API errors
-`server` returns errors in the same format as OAI: https://github.com/openai/openai-openapi
+`llama-server` returns errors in the same format as OAI: https://github.com/openai/openai-openapi
Example of an error:
diff --git a/examples/server/bench/README.md b/examples/server/bench/README.md
index 23a3ec97..0f18ca39 100644
--- a/examples/server/bench/README.md
+++ b/examples/server/bench/README.md
@@ -99,7 +99,7 @@ The `bench.py` script does several steps:
It aims to be used in the CI, but you can run it manually:
```shell
-LLAMA_SERVER_BIN_PATH=../../../cmake-build-release/bin/server python bench.py \
+LLAMA_SERVER_BIN_PATH=../../../cmake-build-release/bin/llama-server python bench.py \
--runner-label local \
--name local \
--branch `git rev-parse --abbrev-ref HEAD` \
diff --git a/examples/server/bench/bench.py b/examples/server/bench/bench.py
index 86c5de10..4fbbb203 100644
--- a/examples/server/bench/bench.py
+++ b/examples/server/bench/bench.py
@@ -245,7 +245,7 @@ def start_server(args):
def start_server_background(args):
# Start the server
- server_path = '../../../build/bin/server'
+ server_path = '../../../build/bin/llama-server'
if 'LLAMA_SERVER_BIN_PATH' in os.environ:
server_path = os.environ['LLAMA_SERVER_BIN_PATH']
server_args = [
diff --git a/examples/server/public_simplechat/readme.md b/examples/server/public_simplechat/readme.md
index 36a46885..2dc17782 100644
--- a/examples/server/public_simplechat/readme.md
+++ b/examples/server/public_simplechat/readme.md
@@ -44,12 +44,12 @@ http module.
### running using examples/server
-bin/server -m path/model.gguf --path ../examples/server/public_simplechat [--port PORT]
+./llama-server -m path/model.gguf --path examples/server/public_simplechat [--port PORT]
### running using python3's server module
first run examples/server
-* bin/server -m path/model.gguf
+* ./llama-server -m path/model.gguf
next run this web front end in examples/server/public_simplechat
* cd ../examples/server/public_simplechat
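[Editor's note] The two-step flow above (start `llama-server`, then serve the static front end with python3's server module) can also be written as a small script. This is a rough stdlib sketch, not part of the example; the port 8000 and the repo-root working directory are illustrative assumptions.

```python
# Sketch: serve the simplechat static files, equivalent to running
# `python3 -m http.server` from examples/server/public_simplechat.
import functools
import http.server
import socketserver

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler,
    directory="examples/server/public_simplechat",  # assumes repo root as cwd
)
with socketserver.TCPServer(("127.0.0.1", 8000), handler) as httpd:
    print("serving simplechat on http://127.0.0.1:8000")
    httpd.serve_forever()
```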
diff --git a/examples/server/tests/README.md b/examples/server/tests/README.md
index 83c0208f..5e6cb277 100644
--- a/examples/server/tests/README.md
+++ b/examples/server/tests/README.md
@@ -27,10 +27,8 @@ To mitigate it, you can increase values in `n_predict`, `kv_size`.
```shell
cd ../../..
-mkdir build
-cd build
-cmake -DLLAMA_CURL=ON ../
-cmake --build . --target server
+cmake -B build -DLLAMA_CURL=ON
+cmake --build build --target llama-server
```
2. Start the test: `./tests.sh`
@@ -40,7 +38,7 @@ It's possible to override some scenario steps values with environment variables:
| variable | description |
|--------------------------|------------------------------------------------------------------------------------------------|
| `PORT` | `context.server_port` to set the listening port of the server during scenario, default: `8080` |
-| `LLAMA_SERVER_BIN_PATH` | to change the server binary path, default: `../../../build/bin/server` |
+| `LLAMA_SERVER_BIN_PATH` | to change the server binary path, default: `../../../build/bin/llama-server` |
| `DEBUG` | "ON" to enable steps and server verbose mode `--verbose` |
| `SERVER_LOG_FORMAT_JSON` | if set switch server logs to json format |
| `N_GPU_LAYERS` | number of model layers to offload to VRAM `-ngl --n-gpu-layers` |
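[Editor's note] The environment variables in the table above are exported before invoking `./tests.sh`. As a rough illustration only (this helper is not part of the repository), the overrides could be wired up from Python like so, defaulting `LLAMA_SERVER_BIN_PATH` to the renamed binary:

```python
# Hypothetical helper (not in the repo): run the server test suite from
# examples/server/tests with the documented environment overrides applied.
import os
import subprocess

env = dict(os.environ)
env.setdefault("LLAMA_SERVER_BIN_PATH", "../../../build/bin/llama-server")
env.setdefault("PORT", "8080")   # context.server_port
env.setdefault("DEBUG", "ON")    # verbose steps and server --verbose
subprocess.run(["./tests.sh"], env=env, check=True)
```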
diff --git a/examples/server/tests/features/steps/steps.py b/examples/server/tests/features/steps/steps.py
index 26d9359d..7b5dabb0 100644
--- a/examples/server/tests/features/steps/steps.py
+++ b/examples/server/tests/features/steps/steps.py
@@ -1272,9 +1272,9 @@ def context_text(context):
def start_server_background(context):
if os.name == 'nt':
- context.server_path = '../../../build/bin/Release/server.exe'
+ context.server_path = '../../../build/bin/Release/llama-server.exe'
else:
- context.server_path = '../../../build/bin/server'
+ context.server_path = '../../../build/bin/llama-server'
if 'LLAMA_SERVER_BIN_PATH' in os.environ:
context.server_path = os.environ['LLAMA_SERVER_BIN_PATH']
server_listen_addr = context.server_fqdn
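[Editor's note] For reference, the path-selection logic this hunk touches reduces to the pattern below: pick the platform-specific default (Windows Release build vs. the default build directory), then honour the `LLAMA_SERVER_BIN_PATH` override. This is a simplified standalone sketch, not a copy of the full `start_server_background` implementation.

```python
# Simplified sketch of the server-binary path resolution used by the test steps.
import os

def resolve_server_path() -> str:
    if os.name == 'nt':
        path = '../../../build/bin/Release/llama-server.exe'
    else:
        path = '../../../build/bin/llama-server'
    # Environment override takes precedence over the platform default.
    return os.environ.get('LLAMA_SERVER_BIN_PATH', path)

print(resolve_server_path())
```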