author | Georgi Gerganov <ggerganov@gmail.com> | 2024-03-07 11:41:53 +0200
---|---|---
committer | GitHub <noreply@github.com> | 2024-03-07 11:41:53 +0200
commit | 2002bc96bf2cbf5ab981a17d7e994d817c9801f5 (patch) |
tree | e96b820fcd091c19ebbbae353c5358d9978cc830 /llama.cpp |
parent | ceca1aef0738b57951cd12c603c3477e75312dec (diff) |

server : refactor (#5882)
* server : refactoring (wip)
* server : remove llava/clip objects from build
* server : fix empty prompt handling + all slots idle logic
* server : normalize id vars
* server : code style
* server : simplify model chat template validation
* server : code style
* server : minor
* llama : llama_chat_apply_template support null buf
* server : do not process embedding requests when disabled
* server : reorganize structs and enums + naming fixes
* server : merge oai.hpp in utils.hpp
* server : refactor system prompt update at start
* server : disable cached prompts with self-extend
* server : do not process more than n_batch tokens per iter
* server: tests: embeddings use a real embeddings model (#5908)
* server, tests : bump batch to fit 1 embedding prompt
* server: tests: embeddings, fix build type Debug randomly failing (#5911)
* server: tests: embeddings, use different KV Cache size
* server: tests: embeddings, ensure prompt does not exceed n_batch, increase embedding timeout, reduce number of concurrent embeddings
* server: tests: embeddings, no need to wait for server idle as it can time out
* server: refactor: clean up http code (#5912)
* server : avoid n_available var
ggml-ci
* server: refactor: better http codes
* server : simplify json parsing + add comment about t_last
* server : rename server structs
* server : allow to override FQDN in tests
ggml-ci
* server : add comments
---------
Co-authored-by: Pierrick Hymbert <pierrick.hymbert@gmail.com>
Diffstat (limited to 'llama.cpp')
-rw-r--r-- | llama.cpp | 6
1 file changed, 5 insertions(+), 1 deletion(-)

```diff
@@ -13541,18 +13541,22 @@ LLAMA_API int32_t llama_chat_apply_template(
             curr_tmpl = std::string(model_template.data(), model_template.size());
         }
     }
+
     // format the chat to string
     std::vector<const llama_chat_message *> chat_vec;
     chat_vec.resize(n_msg);
     for (size_t i = 0; i < n_msg; i++) {
         chat_vec[i] = &chat[i];
     }
+
     std::string formatted_chat;
     int32_t res = llama_chat_apply_template_internal(curr_tmpl, chat_vec, formatted_chat, add_ass);
     if (res < 0) {
         return res;
     }
-    strncpy(buf, formatted_chat.c_str(), length);
+    if (buf && length > 0) {
+        strncpy(buf, formatted_chat.c_str(), length);
+    }
     return res;
 }
 
```
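The guard above makes it safe to call `llama_chat_apply_template` with a null `buf` (or `length == 0`) and use the return value, which is the full length of the formatted prompt, to size a buffer before a second call. The sketch below shows that caller-side two-pass pattern; it is illustrative only, assumes the `llama.h` signature as of this commit (model pointer first, `tmpl == nullptr` meaning "read the template from the model metadata"), and the helper name `format_chat` is not part of the patch:

```cpp
#include <algorithm>
#include <string>
#include <vector>

#include "llama.h"

// Illustrative helper (not part of the patch): format a chat with the model's
// own template using the two-pass pattern enabled by this change.
static std::string format_chat(const llama_model * model,
                               const std::vector<llama_chat_message> & msgs,
                               bool add_ass) {
    // Pass 1: buf == nullptr is now accepted; the return value is the number of
    // bytes the formatted prompt needs (negative if the template is unsupported).
    const int32_t n_req = llama_chat_apply_template(
            model, nullptr, msgs.data(), msgs.size(), add_ass, nullptr, 0);
    if (n_req < 0) {
        return ""; // unsupported or malformed template
    }

    // Pass 2: write into a buffer of the reported size.
    std::vector<char> buf(n_req + 1, 0);
    const int32_t n_out = llama_chat_apply_template(
            model, nullptr, msgs.data(), msgs.size(), add_ass,
            buf.data(), (int32_t) buf.size());

    // strncpy does not null-terminate on truncation, so build the string from
    // the reported length rather than relying on a terminator.
    return std::string(buf.data(), std::min(n_out, (int32_t) buf.size()));
}
```

Before this change, the first call would have crashed inside `strncpy`; with the guard, the "query the length first, then allocate" pattern becomes a supported use of the API.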