author    Xuan Son Nguyen <thichthat@gmail.com>  2024-03-25 09:42:17 +0100
committer GitHub <noreply@github.com>  2024-03-25 09:42:17 +0100
commit    ad3a0505e3b6cd777259ee35e61d428357ffc565 (patch)
tree      ae3976c33914df984df4f0b0ae5445422a0dd30d  /examples/server/README.md
parent    95ad616cddda50273e955bfe192328acd9aa4896 (diff)
Server: clean up OAI params parsing function (#6284)
* server: clean up oai parsing function
* fix response_format
* fix empty response_format
* minor fixes
* add TODO for logprobs
* update docs
Diffstat (limited to 'examples/server/README.md')
-rw-r--r--  examples/server/README.md | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/examples/server/README.md b/examples/server/README.md
index dfea2b90..49121a46 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -360,7 +360,7 @@ Notice that each `probs` is an array of length `n_probs`.
- `default_generation_settings` - the default generation settings for the `/completion` endpoint, has the same fields as the `generation_settings` response object from the `/completion` endpoint.
- `total_slots` - the total number of slots for process requests (defined by `--parallel` option)
-- **POST** `/v1/chat/completions`: OpenAI-compatible Chat Completions API. Given a ChatML-formatted json description in `messages`, it returns the predicted completion. Both synchronous and streaming mode are supported, so scripted and interactive applications work fine. While no strong claims of compatibility with OpenAI API spec is being made, in our experience it suffices to support many apps. Only ChatML-tuned models, such as Dolphin, OpenOrca, OpenHermes, OpenChat-3.5, etc can be used with this endpoint.
+- **POST** `/v1/chat/completions`: OpenAI-compatible Chat Completions API. Given a ChatML-formatted json description in `messages`, it returns the predicted completion. Both synchronous and streaming mode are supported, so scripted and interactive applications work fine. While no strong claims of compatibility with OpenAI API spec is being made, in our experience it suffices to support many apps. Only model with [supported chat template](https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template) can be used optimally with this endpoint. By default, ChatML template will be used.
*Options:*
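The endpoint documented in the diff above accepts an OpenAI-style request body. As a rough sketch of what such a body looks like (the system/user messages and field values here are illustrative assumptions, not taken from the README), it can be built like this:

```python
import json

# Minimal request body for the OpenAI-compatible /v1/chat/completions
# endpoint. The "model" value is a placeholder: the llama.cpp server
# serves whatever model it was started with, but many OpenAI clients
# still require the field to be present.
payload = {
    "model": "default",  # placeholder; ignored by the server
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,  # set True for streaming mode
}

body = json.dumps(payload)
print(body)
```

Assuming the server is running on its default port, the serialized body could then be sent with something like `curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d "$BODY"`; the port and host depend on how the server was launched.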