author    Kawrakow <48489457+ikawrakow@users.noreply.github.com>  2024-07-27 07:55:01 +0200
committer GitHub <noreply@github.com>                             2024-07-27 07:55:01 +0200
commit    154e0d75fccf1784fe9ff6fd76a630b66563da3d (patch)
tree      81ce6dbb5b1900c1aa78a879f0593c694cab9d27 /examples/server/tests/features/server.feature
parent    0684c3e9c70d49323b4fc517128cbe222cab7f96 (diff)
Merge mainline llama.cpp (#3)
* Merging mainline - WIP
* Merging mainline - WIP

  AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal
* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'examples/server/tests/features/server.feature')
-rw-r--r--  examples/server/tests/features/server.feature  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/examples/server/tests/features/server.feature b/examples/server/tests/features/server.feature
index d21c0913..b5597145 100644
--- a/examples/server/tests/features/server.feature
+++ b/examples/server/tests/features/server.feature
@@ -82,7 +82,7 @@ Feature: llama.cpp server
     Examples: Prompts
       | response_format                                                      | n_predicted | re_content   |
-      | {"type": "json_object", "schema": {"const": "42"}}                   | 5           | "42"         |
+      | {"type": "json_object", "schema": {"const": "42"}}                   | 6           | "42"         |
       | {"type": "json_object", "schema": {"items": [{"type": "integer"}]}}  | 10          | \[ -300 \]   |
       | {"type": "json_object"}                                              | 10          | \{ " Jacky.  |