| author | Pierrick Hymbert <pierrick.hymbert@gmail.com> | 2024-03-17 19:12:37 +0100 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-03-17 19:12:37 +0100 |
| commit | d01b3c4c32357567f3531d4e6ceffc5d23e87583 (patch) | |
| tree | 80e0a075a8b120d6b5b095a73cc36cb2a4535aed /examples/server/tests/features/server.feature | |
| parent | cd776c37c945bf58efc8fe44b370456680cb1b59 (diff) | |
common: llama_load_model_from_url using --model-url (#6098)
* common: llama_load_model_from_url with libcurl dependency
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Diffstat (limited to 'examples/server/tests/features/server.feature')
| -rw-r--r-- | examples/server/tests/features/server.feature | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
```diff
diff --git a/examples/server/tests/features/server.feature b/examples/server/tests/features/server.feature
index 5014f326..7448986e 100644
--- a/examples/server/tests/features/server.feature
+++ b/examples/server/tests/features/server.feature
@@ -4,7 +4,8 @@ Feature: llama.cpp server
 
   Background: Server startup
     Given a server listening on localhost:8080
-    And a model file tinyllamas/stories260K.gguf from HF repo ggml-org/models
+    And a model url https://huggingface.co/ggml-org/models/resolve/main/tinyllamas/stories260K.gguf
+    And a model file stories260K.gguf
     And a model alias tinyllama-2
     And 42 as server seed
     # KV Cache corresponds to the total amount of tokens
```
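The test scenario above switches from a repo-relative model file to a model URL, which the server resolves and downloads before startup. As a rough illustration of the download-unless-cached behavior behind `--model-url`, here is a minimal Python sketch (the actual implementation in this commit is C code using libcurl; the function name `fetch_model` and the `.part` staging convention here are hypothetical, introduced only for this example):

```python
import os
import urllib.request

def fetch_model(url: str, dest: str) -> str:
    """Download a model file from `url` to `dest` unless it already exists locally."""
    if not os.path.exists(dest):
        # Stage into a temporary ".part" file, then rename, so an interrupted
        # download never leaves a truncated file at the final path.
        tmp = dest + ".part"
        urllib.request.urlretrieve(url, tmp)
        os.replace(tmp, dest)
    return dest
```

A second call with the same `dest` returns immediately without re-downloading, which mirrors why the scenario declares both a model url and a local model file name (`stories260K.gguf`) for the cached artifact.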