path: root/examples/server/tests/features/issues.feature
author    Pierrick Hymbert <pierrick.hymbert@gmail.com>    2024-02-24 12:28:55 +0100
committer GitHub <noreply@github.com>    2024-02-24 12:28:55 +0100
commit    525213d2f5da1eaf4b922b6b792cb52b2c613368 (patch)
tree      8400e8a97d231b13a2df0c9d8b7c8fa945d24d5e /examples/server/tests/features/issues.feature
parent    fd43d66f46ee3b5345fb8a74a252d86ccd34a409 (diff)
server: init functional tests (#5566)
* server: tests: init scenarios
  - health and slots endpoints
  - completion endpoint
  - OAI compatible chat completion requests w/ and without streaming
  - completion multi users scenario
  - multi users scenario on OAI compatible endpoint with streaming
  - multi users with total number of tokens to predict exceeding the KV cache size
  - server wrong usage scenario, like in "Infinite loop of context shift" #3969
  - slots shifting
  - continuous batching
  - embeddings endpoint
  - multi users embedding endpoint: "Segmentation fault" #5655
  - OpenAI-compatible embeddings API
  - tokenize endpoint
  - CORS and api key scenario

* server: CI GitHub workflow

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
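The KV-cache scenarios above hinge on a simple budget: with 2 slots sharing a 64-token KV cache, each slot gets half the context, so concurrent requests whose prompt plus predicted tokens exceed that share force a context shift (the situation behind #3969). A minimal sketch of that arithmetic, with illustrative function names (not part of the test suite):

```python
def tokens_per_slot(kv_cache_size: int, n_slots: int) -> int:
    """Each server slot receives an equal share of the KV cache."""
    return kv_cache_size // n_slots

def exceeds_budget(n_prompt: int, n_predict: int,
                   kv_cache_size: int, n_slots: int) -> bool:
    """True when a request cannot fit in its slot without a context shift."""
    return n_prompt + n_predict > tokens_per_slot(kv_cache_size, n_slots)

# With the scenario's settings (64 KV cache, 2 slots => 32 tokens per slot):
print(exceeds_budget(n_prompt=24, n_predict=16, kv_cache_size=64, n_slots=2))  # True
print(exceeds_budget(n_prompt=8,  n_predict=8,  kv_cache_size=64, n_slots=2))  # False
```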
Diffstat (limited to 'examples/server/tests/features/issues.feature')
-rw-r--r--    examples/server/tests/features/issues.feature | 36
1 file changed, 36 insertions(+), 0 deletions(-)
diff --git a/examples/server/tests/features/issues.feature b/examples/server/tests/features/issues.feature
new file mode 100644
index 00000000..542006d9
--- /dev/null
+++ b/examples/server/tests/features/issues.feature
@@ -0,0 +1,36 @@
+# List of ongoing issues
+@bug
+Feature: Issues
+ # Issue #5655
+ Scenario: Multi users embeddings
+ Given a server listening on localhost:8080
+ And a model file stories260K.gguf
+ And a model alias tinyllama-2
+ And 42 as server seed
+ And 64 KV cache size
+ And 2 slots
+ And continuous batching
+ And embeddings extraction
+ Then the server is starting
+ Then the server is healthy
+
+ Given a prompt:
+ """
+ Write a very long story about AI.
+ """
+ And a prompt:
+ """
+ Write another very long music lyrics.
+ """
+ And a prompt:
+ """
+ Write a very long poem.
+ """
+ And a prompt:
+ """
+ Write a very long joke.
+ """
+ Given concurrent embedding requests
+ Then the server is busy
+ Then the server is idle
+ Then all embeddings are generated
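
The "concurrent embedding requests" step is what triggered the segfault in #5655: several users hitting the embeddings endpoint while both slots are busy. A sketch of that request pattern, with a stub standing in for the real HTTP call to the server (`request_embedding` and the fake vector are illustrative, not the suite's actual step implementation):

```python
from concurrent.futures import ThreadPoolExecutor

PROMPTS = [
    "Write a very long story about AI.",
    "Write another very long music lyrics.",
    "Write a very long poem.",
    "Write a very long joke.",
]

def request_embedding(prompt: str) -> list[float]:
    # Stand-in for an embeddings request to localhost:8080;
    # returns a fake vector instead of calling the server.
    return [float(len(prompt)), 0.0]

def run_concurrent(prompts: list[str]) -> list[list[float]]:
    # Fire all requests at once so both server slots are occupied
    # simultaneously, mirroring the multi-user scenario.
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        return list(pool.map(request_embedding, prompts))

embeddings = run_concurrent(PROMPTS)
assert len(embeddings) == len(PROMPTS)  # "all embeddings are generated"
```

The real scenario then checks the server transitions busy -> idle and that every response contains a non-empty embedding.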