| author | Georgi Gerganov <ggerganov@gmail.com> | 2024-03-07 11:41:53 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-03-07 11:41:53 +0200 |
| commit | 2002bc96bf2cbf5ab981a17d7e994d817c9801f5 (patch) | |
| tree | e96b820fcd091c19ebbbae353c5358d9978cc830 /examples/server/tests/features/embeddings.feature | |
| parent | ceca1aef0738b57951cd12c603c3477e75312dec (diff) | |
server : refactor (#5882)
* server : refactoring (wip)
* server : remove llava/clip objects from build
* server : fix empty prompt handling + all slots idle logic
* server : normalize id vars
* server : code style
* server : simplify model chat template validation
* server : code style
* server : minor
* llama : llama_chat_apply_template support null buf
* server : do not process embedding requests when disabled
* server : reorganize structs and enums + naming fixes
* server : merge oai.hpp in utils.hpp
* server : refactor system prompt update at start
* server : disable cached prompts with self-extend
* server : do not process more than n_batch tokens per iter
* server: tests: embeddings use a real embeddings model (#5908)
* server, tests : bump batch to fit 1 embedding prompt
* server: tests: embeddings fix build type Debug is randomly failing (#5911)
* server: tests: embeddings, use different KV Cache size
* server: tests: embeddings, fix prompts so they do not exceed n_batch, increase embedding timeout, reduce number of concurrent embeddings
* server: tests: embeddings, no need to wait for server idle as it can timeout
* server: refactor: clean up http code (#5912)
* server : avoid n_available var
ggml-ci
* server: refactor: better http codes
* server : simplify json parsing + add comment about t_last
* server : rename server structs
* server : allow overriding FQDN in tests
ggml-ci
* server : add comments
---------
Co-authored-by: Pierrick Hymbert <pierrick.hymbert@gmail.com>
Diffstat (limited to 'examples/server/tests/features/embeddings.feature')
| -rw-r--r-- | examples/server/tests/features/embeddings.feature | 94 |

1 file changed, 94 insertions(+), 0 deletions(-)
```diff
diff --git a/examples/server/tests/features/embeddings.feature b/examples/server/tests/features/embeddings.feature
new file mode 100644
index 00000000..b47661e9
--- /dev/null
+++ b/examples/server/tests/features/embeddings.feature
@@ -0,0 +1,94 @@
+@llama.cpp
+@embeddings
+Feature: llama.cpp server
+
+  Background: Server startup
+    Given a server listening on localhost:8080
+    And a model file bert-bge-small/ggml-model-f16.gguf from HF repo ggml-org/models
+    And a model alias bert-bge-small
+    And 42 as server seed
+    And 2 slots
+    And 1024 as batch size
+    And 2048 KV cache size
+    And embeddings extraction
+    Then the server is starting
+    Then the server is healthy
+
+  Scenario: Embedding
+    When embeddings are computed for:
+    """
+    What is the capital of Bulgaria ?
+    """
+    Then embeddings are generated
+
+  Scenario: OAI Embeddings compatibility
+    Given a model bert-bge-small
+    When an OAI compatible embeddings computation request for:
+    """
+    What is the capital of Spain ?
+    """
+    Then embeddings are generated
+
+  Scenario: OAI Embeddings compatibility with multiple inputs
+    Given a model bert-bge-small
+    Given a prompt:
+      """
+      In which country Paris is located ?
+      """
+    And a prompt:
+      """
+      Is Madrid the capital of Spain ?
+      """
+    When an OAI compatible embeddings computation request for multiple inputs
+    Then embeddings are generated
+
+  Scenario: Multi users embeddings
+    Given a prompt:
+      """
+      Write a very long story about AI.
+      """
+    And a prompt:
+      """
+      Write another very long music lyrics.
+      """
+    And a prompt:
+      """
+      Write a very long poem.
+      """
+    And a prompt:
+      """
+      Write a very long joke.
+      """
+    Given concurrent embedding requests
+    Then the server is busy
+    Then the server is idle
+    Then all embeddings are generated
+
+  Scenario: Multi users OAI compatibility embeddings
+    Given a prompt:
+      """
+      In which country Paris is located ?
+      """
+    And a prompt:
+      """
+      Is Madrid the capital of Spain ?
+      """
+    And a prompt:
+      """
+      What is the biggest US city ?
+      """
+    And a prompt:
+      """
+      What is the capital of Bulgaria ?
+      """
+    And a model bert-bge-small
+    Given concurrent OAI embedding requests
+    Then the server is busy
+    Then the server is idle
+    Then all embeddings are generated
+
+  Scenario: All embeddings should be the same
+    Given 10 fixed prompts
+    And a model bert-bge-small
+    Given concurrent OAI embedding requests
+    Then all embeddings are the same
```
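The "OAI Embeddings compatibility" scenarios in the feature file exercise the server's OpenAI-compatible embeddings endpoint. A hypothetical client-side sketch of the request body those tests send — the payload shape follows the OpenAI embeddings API that the server emulates; no server is contacted here, and `BASE_URL` simply mirrors the `localhost:8080` address from the Background:

```python
import json

BASE_URL = "http://localhost:8080"  # matches the Background in the feature file


def build_embeddings_request(inputs, model="bert-bge-small"):
    """Build the JSON body for POST {BASE_URL}/v1/embeddings.

    `inputs` may be a single string or a list of strings,
    mirroring the "multiple inputs" scenario above.
    """
    return {"input": inputs, "model": model}


# Single input, as in the "OAI Embeddings compatibility" scenario.
single = build_embeddings_request("What is the capital of Spain ?")

# Multiple inputs in one request, as in the "multiple inputs" scenario.
multi = build_embeddings_request(
    [
        "In which country Paris is located ?",
        "Is Madrid the capital of Spain ?",
    ]
)

body = json.dumps(multi)
print(body)
```

In a real test run this body would be POSTed to `/v1/embeddings` and the response's `data[i].embedding` vectors checked, which is what the "embeddings are generated" steps assert.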