author | Pierrick Hymbert <pierrick.hymbert@gmail.com> | 2024-02-24 12:28:55 +0100
---|---|---
committer | GitHub <noreply@github.com> | 2024-02-24 12:28:55 +0100
commit | 525213d2f5da1eaf4b922b6b792cb52b2c613368 (patch) |
tree | 8400e8a97d231b13a2df0c9d8b7c8fa945d24d5e | /examples/server/tests/features/wrong_usages.feature
parent | fd43d66f46ee3b5345fb8a74a252d86ccd34a409 (diff) |
server: init functional tests (#5566)
* server: tests: init scenarios
- health and slots endpoints
- completion endpoint
- OpenAI-compatible chat completion requests with and without streaming
- multi-user completion scenario
- multi-user scenario on the OpenAI-compatible endpoint with streaming
- multi-user scenario where the total number of tokens to predict exceeds the KV cache size
- server wrong-usage scenario, as in "Infinite loop of context shift" (#3969)
- slot shifting
- continuous batching
- embeddings endpoint
- multi-user embeddings endpoint ("Segmentation fault" #5655)
- OpenAI-compatible embeddings API
- tokenize endpoint
- CORS and API key scenario
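The multi-user scenarios above issue completion requests concurrently. A minimal sketch of how such a step could be driven with a thread pool (stdlib only; `post_completion` is a hypothetical stand-in for the HTTP call a real step definition would make to the server):

```python
from concurrent.futures import ThreadPoolExecutor

def post_completion(prompt: str) -> str:
    # Hypothetical stand-in for an HTTP POST to the server's completion
    # endpoint; here it just echoes so the sketch is self-contained.
    return f"predicted: {prompt}"

def run_concurrent_completions(prompts):
    # Fire one completion request per simulated user concurrently,
    # as in a "concurrent completion requests" step, and collect results.
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        return list(pool.map(post_completion, prompts))

results = run_concurrent_completions(["Tell me a story", "Write a haiku"])
```

A real test would then assert that every prompt received a completion, mirroring a "Then all prompts are predicted" step.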
* server: CI GitHub workflow
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Diffstat (limited to 'examples/server/tests/features/wrong_usages.feature')
-rw-r--r-- | examples/server/tests/features/wrong_usages.feature | 21 |
1 file changed, 21 insertions, 0 deletions
diff --git a/examples/server/tests/features/wrong_usages.feature b/examples/server/tests/features/wrong_usages.feature
new file mode 100644
index 00000000..e228b237
--- /dev/null
+++ b/examples/server/tests/features/wrong_usages.feature
@@ -0,0 +1,21 @@
+# run with ./test.sh --tags wrong_usage
+@wrong_usage
+Feature: Wrong usage of llama.cpp server
+
+  #3969 The user must always set --n-predict option
+  # to cap the number of tokens any completion request can generate
+  # or pass n_predict/max_tokens in the request.
+  Scenario: Infinite loop
+    Given a server listening on localhost:8080
+    And a model file stories260K.gguf
+    # Uncomment below to fix the issue
+    #And 64 server max tokens to predict
+    Then the server is starting
+    Given a prompt:
+    """
+    Go to: infinite loop
+    """
+    # Uncomment below to fix the issue
+    #And 128 max tokens to predict
+    Given concurrent completion requests
+    Then all prompts are predicted
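The fix the scenario hints at is capping generation per request via `n_predict` (the server-side equivalent being the `--n-predict` option). A minimal sketch of building such a capped request payload (stdlib only; `build_completion_request` is a hypothetical helper, not part of the test suite):

```python
import json

def build_completion_request(prompt: str, n_predict: int = 128) -> str:
    # Cap the number of tokens the server may generate for this request;
    # omitting this cap (and the server's --n-predict option) is the
    # wrong usage the scenario above exercises (#3969).
    return json.dumps({"prompt": prompt, "n_predict": n_predict})

payload = build_completion_request("Go to: infinite loop", n_predict=128)
```

With the cap in place, even a prompt that would otherwise loop indefinitely terminates after at most `n_predict` generated tokens.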