path: root/examples/server/tests/features/slotsave.feature
author:    jaime-m-p <167997752+jaime-m-p@users.noreply.github.com> 2024-05-21 14:39:48 +0200
committer: GitHub <noreply@github.com> 2024-05-21 14:39:48 +0200
commit: d7e852c1bc8e85bf62a6f1aede08cd2de723404a (patch)
tree:   46323a83d73f66727459aee88a995e946a78e005 /examples/server/tests/features/slotsave.feature
parent: 917dc8cfa67a72fb7c8bf7392270da3bf4833af4 (diff)
Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)
* Update brute force test: add_special
* Update brute force test: default values for add_bos_token and add_eos_token
* Enable rtrim when pre-inserting BOS

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Revert "server : fix test regexes"
Diffstat (limited to 'examples/server/tests/features/slotsave.feature')
-rw-r--r--  examples/server/tests/features/slotsave.feature | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/examples/server/tests/features/slotsave.feature b/examples/server/tests/features/slotsave.feature
index ba4ecb6f..1c281c07 100644
--- a/examples/server/tests/features/slotsave.feature
+++ b/examples/server/tests/features/slotsave.feature
@@ -26,7 +26,7 @@ Feature: llama.cpp server slot management
     # Since we have cache, this should only process the last tokens
     Given a user prompt "What is the capital of Germany?"
     And a completion request with no api error
-    Then 24 tokens are predicted matching (Thank|special|Lily)
+    Then 24 tokens are predicted matching (Thank|special)
     And 7 prompt tokens are processed
     # Loading the original cache into slot 0,
     # we should only be processing 1 prompt token and get the same output
@@ -41,7 +41,7 @@ Feature: llama.cpp server slot management
     Given a user prompt "What is the capital of Germany?"
     And using slot id 1
     And a completion request with no api error
-    Then 24 tokens are predicted matching (Thank|special|Lily)
+    Then 24 tokens are predicted matching (Thank|special)
     And 1 prompt tokens are processed

   Scenario: Erase Slot
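For context, the `Then N tokens are predicted matching (…)` step above is checked by the Python test harness roughly as follows. This is a minimal sketch, not the actual step definition: the function name, the `content` field, and the `timings.predicted_n` field are assumptions about the server's completion response shape, and the real steps in `examples/server/tests/features/steps/` do more validation.

```python
import re

def assert_n_tokens_predicted(response, n_predicted, re_content=None):
    """Sketch of the check behind 'Then N tokens are predicted matching PATTERN'.

    `response` is assumed to be the parsed JSON of a /completion reply,
    with the generated text in `content` and the predicted-token count
    in `timings.predicted_n` (assumed field names).
    """
    content = response["content"]
    if re_content is not None:
        # The feature file passes alternation patterns like (Thank|special);
        # the commit above dropped the no-longer-produced |Lily branch.
        assert re.search(re_content, content), (
            f"no match for {re_content!r} in {content!r}"
        )
    assert response["timings"]["predicted_n"] == n_predicted
```

Under this sketch, tightening the pattern from `(Thank|special|Lily)` to `(Thank|special)` simply means a completion is only accepted if it contains one of the two remaining alternatives.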