author | Pierrick Hymbert <pierrick.hymbert@gmail.com> | 2024-03-02 22:00:14 +0100
---|---|---
committer | GitHub <noreply@github.com> | 2024-03-02 22:00:14 +0100
commit | 9731134296af3a6839cd682e51d9c2109a871de5 (patch) |
tree | 882db21742d552ee948d1b5db013f02bf35ff8fa /examples/server/tests/features/wrong_usages.feature |
parent | 4a6e2d6142ab815c964924896891e9ab3e050632 (diff) |
server: tests: passkey challenge / self-extend with context shift demo (#5832)
* server: tests: add models endpoint scenario
* server: /v1/models add some metadata (see the first sketch after this list)
* server: tests: add debug field in context before scenario
* server: tests: download model from HF, add batch size
* server: tests: add passkey test
* server: tests: add group attention params
* server: do not truncate prompt tokens if self-extend through group attention is enabled (see the second sketch after this list)
* server: logs: do not truncate log values
* server: tests - passkey - first good working value of n_ga (group attention factor)
* server: tests: fix server timeout
* server: tests: fix passkey, add doc, fix regex content matching, fix timeout
* server: tests: fix regex content matching
* server: tests: schedule slow tests on master
* server: metrics: fix when no prompt processed
* server: tests: self-extend add llama-2-7B and Mixtral-8x7B-v0.1
* server: tests: increase timeout for completion
* server: tests: keep only the PHI-2 test
* server: tests: passkey add a negative test
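First sketch: the `/v1/models` endpoint touched above is the server's OpenAI-compatible model listing, so a quick way to inspect the metadata this commit adds is to query it directly. A minimal sketch, assuming a server already running on localhost:8080 (the exact metadata keys depend on the server revision):

```python
import json
import urllib.request

# Query the OpenAI-compatible model listing of a llama.cpp server
# assumed to be running on localhost:8080.
with urllib.request.urlopen("http://localhost:8080/v1/models") as resp:
    models = json.load(resp)

# Pretty-print the response to inspect the per-model metadata;
# the exact keys depend on the server revision.
print(json.dumps(models, indent=2))
```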
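Second sketch: the passkey scenarios exercise self-extend through group attention, which lets the server accept prompts longer than the model's training context instead of truncating them. Below is a minimal sketch of how a test harness might start the server with group-attention parameters; `--grp-attn-n`/`--grp-attn-w` are the server's group-attention factor and width flags, while the binary path, model file, and numeric values are illustrative assumptions, not the values used by the actual tests:

```python
import subprocess

n_ga = 4         # group attention factor (the "n_ga" tuned in this PR)
n_ga_w = 512     # group attention width
ctx_size = 2048  # KV cache size the server starts with

# Launch the llama.cpp server with self-extend enabled; the binary
# and model paths are assumptions for this sketch.
server = subprocess.Popen([
    "./server",
    "--model", "models/phi-2.Q4_0.gguf",
    "--ctx-size", str(ctx_size),
    "--batch-size", "512",
    "--grp-attn-n", str(n_ga),
    "--grp-attn-w", str(n_ga_w),
    "--port", "8080",
])
```

With n_ga > 1, the change above stops truncating prompt tokens, since self-extend is expected to cover prompts beyond the training context.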
Diffstat (limited to 'examples/server/tests/features/wrong_usages.feature')
-rw-r--r-- | examples/server/tests/features/wrong_usages.feature | 5 |
1 file changed, 3 insertions, 2 deletions
diff --git a/examples/server/tests/features/wrong_usages.feature b/examples/server/tests/features/wrong_usages.feature
index e228b237..cf14b3b4 100644
--- a/examples/server/tests/features/wrong_usages.feature
+++ b/examples/server/tests/features/wrong_usages.feature
@@ -1,4 +1,4 @@
-# run with ./test.sh --tags wrong_usage
+# run with: ./tests.sh --no-skipped --tags wrong_usage
 
 @wrong_usage
 Feature: Wrong usage of llama.cpp server
@@ -7,7 +7,7 @@ Feature: Wrong usage of llama.cpp server
   # or pass n_predict/max_tokens in the request.
   Scenario: Infinite loop
     Given a server listening on localhost:8080
-    And a model file stories260K.gguf
+    And a model file tinyllamas/stories260K.gguf from HF repo ggml-org/models
     # Uncomment below to fix the issue
     #And 64 server max tokens to predict
     Then the server is starting
@@ -18,4 +18,5 @@ Feature: Wrong usage of llama.cpp server
     # Uncomment below to fix the issue
     #And 128 max tokens to predict
     Given concurrent completion requests
+    Then the server is idle
     Then all prompts are predicted
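The `Then the server is idle` step added above makes the scenario wait for all slots to drain before asserting that every prompt was predicted. A minimal behave-style sketch of such a step, assuming the `/health` endpoint of this server era reports a `slots_processing` counter and that the test context carries a `base_url` attribute (the suite's real step implementation may differ):

```python
import time

import requests
from behave import step


@step("the server is idle")
def step_server_idle(context):
    # Poll /health until no slot is processing a request any more.
    # Assumes the response includes a "slots_processing" counter and
    # that context.base_url points at the server under test.
    deadline = time.time() + 30  # timeout in seconds
    while time.time() < deadline:
        health = requests.get(f"{context.base_url}/health").json()
        if health.get("slots_processing", 0) == 0:
            return
        time.sleep(0.5)
    raise AssertionError("server still busy after 30s")
```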