author     Pierrick Hymbert <pierrick.hymbert@gmail.com>  2024-02-24 12:28:55 +0100
committer  GitHub <noreply@github.com>                    2024-02-24 12:28:55 +0100
commit     525213d2f5da1eaf4b922b6b792cb52b2c613368 (patch)
tree       8400e8a97d231b13a2df0c9d8b7c8fa945d24d5e /examples/server/tests/features/security.feature
parent     fd43d66f46ee3b5345fb8a74a252d86ccd34a409 (diff)
server: init functional tests (#5566)
* server: tests: init scenarios
  - health and slots endpoints
  - completion endpoint
  - OAI compatible chat completion requests w/ and without streaming
  - completion multi users scenario
  - multi users scenario on OAI compatible endpoint with streaming
  - multi users with total number of tokens to predict exceeding the KV Cache size
  - server wrong usage scenario, like in Infinite loop of "context shift" #3969
  - slots shifting
  - continuous batching
  - embeddings endpoint
  - multi users embedding endpoint: Segmentation fault #5655
  - OpenAI-compatible embeddings API
  - tokenize endpoint
  - CORS and api key scenario

* server: CI GitHub workflow

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
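The api key scenarios in this feature reduce to a simple HTTP exchange: requests carrying the server's configured key succeed, all others are rejected. The following is a minimal client-side sketch in Python (using the third-party requests library), not the test implementation itself; the /completion path and the exact rejection status are assumptions about the server's behavior, while the key values come from the examples tables in the diff below.

import requests

BASE_URL = "http://localhost:8080"  # matches the Background in the feature file

def completion_status(api_key):
    """Send a small completion request and return the HTTP status code."""
    headers = {"Authorization": "Bearer " + api_key} if api_key else {}
    resp = requests.post(
        BASE_URL + "/completion",                 # assumed endpoint path
        json={"prompt": "test", "n_predict": 4},  # prompt and size as in the scenario
        headers=headers,
    )
    return resp.status_code

assert completion_status("llama.cpp") == 200  # matching key: no api error
assert completion_status("hackeme") != 200    # wrong key: api error raised
assert completion_status("") != 200           # missing key: api error raised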
Diffstat (limited to 'examples/server/tests/features/security.feature')
-rw-r--r--  examples/server/tests/features/security.feature | 50
1 file changed, 50 insertions(+), 0 deletions(-)
diff --git a/examples/server/tests/features/security.feature b/examples/server/tests/features/security.feature
new file mode 100644
index 00000000..db06d397
--- /dev/null
+++ b/examples/server/tests/features/security.feature
@@ -0,0 +1,50 @@
+@llama.cpp
+Feature: Security
+
+  Background: Server startup with an api key defined
+    Given a server listening on localhost:8080
+    And a model file stories260K.gguf
+    And a server api key llama.cpp
+    Then the server is starting
+    Then the server is healthy
+
+  Scenario Outline: Completion with some user api key
+    Given a prompt test
+    And a user api key <api_key>
+    And 4 max tokens to predict
+    And a completion request with <api_error> api error
+
+    Examples: Prompts
+      | api_key   | api_error |
+      | llama.cpp | no        |
+      | llama.cpp | no        |
+      | hackeme   | raised    |
+      |           | raised    |
+
+  Scenario Outline: OAI Compatibility
+    Given a system prompt test
+    And a user prompt test
+    And a model test
+    And 2 max tokens to predict
+    And streaming is disabled
+    And a user api key <api_key>
+    Given an OAI compatible chat completions request with <api_error> api error
+
+    Examples: Prompts
+      | api_key   | api_error |
+      | llama.cpp | no        |
+      | llama.cpp | no        |
+      | hackme    | raised    |
+
+
+  Scenario Outline: CORS Options
+    When an OPTIONS request is sent from <origin>
+    Then CORS header <cors_header> is set to <cors_header_value>
+
+    Examples: Headers
+      | origin          | cors_header                      | cors_header_value |
+      | localhost       | Access-Control-Allow-Origin      | localhost         |
+      | web.mydomain.fr | Access-Control-Allow-Origin      | web.mydomain.fr   |
+      | origin          | Access-Control-Allow-Credentials | true              |
+      | web.mydomain.fr | Access-Control-Allow-Methods     | POST              |
+      | web.mydomain.fr | Access-Control-Allow-Headers     | *                 |
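For reference, the CORS scenario above encodes a standard preflight exchange: the client sends an OPTIONS request with an Origin header and the server answers with Access-Control-Allow-* headers. Below is a rough sketch in Python with the requests library; the endpoint path is an assumption, and the expected header values come straight from the examples table.

import requests

# Send a CORS preflight from a given origin and inspect the response headers.
resp = requests.options(
    "http://localhost:8080/v1/chat/completions",  # assumed endpoint path
    headers={"Origin": "web.mydomain.fr"},
)
print(resp.headers.get("Access-Control-Allow-Origin"))       # expected: web.mydomain.fr
print(resp.headers.get("Access-Control-Allow-Credentials"))  # expected: true
print(resp.headers.get("Access-Control-Allow-Methods"))      # expected: POST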