author | Justine Tunney <jtunney@mozilla.com> | 2024-05-10 07:01:08 -0400 |
---|---|---|
committer | GitHub <noreply@github.com> | 2024-05-10 21:01:08 +1000 |
commit | 4e3880978f8b1bf546dd4e6f3b524d6b8739c49c (patch) | |
tree | 54ab13653c57d8a5ecb709947dd5a43596ca64c2 /examples/llava/llava-cli.cpp | |
parent | f89fe2732c5709f6e86d5f4aee2e6d2a561f2eb2 (diff) | |
Fix memory bug in grammar parser (#7194)
The llama.cpp grammar parser had a bug where forgetting to add a closing
quotation mark to strings would cause parsing to crash. Anyone running a
server on a public endpoint is advised to upgrade. To reproduce this bug:
./llamafile -m foo.gguf -p bar --grammar 'root::="'
Credit for discovering and reporting this issue goes to Eclypsium
Security Researcher Richard Johnson <Richard.johnson@eclypsium.com>.
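
The crash itself is fixed inside the grammar parser; the hunk below covers only the llava-cli side of the commit. As a rough illustration of the bug class, and not the actual llama.cpp code, the following sketch scans a quoted string literal with an explicit end-of-input check; the name parse_literal and the error handling are assumptions. Without the pos != end guard, a grammar that ends inside a string sends the loop reading past the end of the input buffer.

```cpp
// Minimal sketch (not the llama.cpp implementation): scan a quoted string
// literal, treating end-of-input before the closing '"' as an error instead
// of reading past the end of the buffer.
#include <cstdio>
#include <stdexcept>
#include <string>

static const char * parse_literal(const char * pos, const char * end, std::string & out) {
    // pos points just past the opening quote
    while (pos != end && *pos != '"') {   // the pos != end guard is the essential check
        out.push_back(*pos++);
    }
    if (pos == end) {
        throw std::runtime_error("unterminated string literal in grammar");
    }
    return pos + 1;                       // skip the closing quote
}

int main() {
    const char src[] = "root rule text"; // opening quote consumed, closing quote missing
    std::string out;
    try {
        parse_literal(src, src + sizeof(src) - 1, out);
    } catch (const std::exception & e) {
        fprintf(stderr, "grammar error: %s\n", e.what());
        return 1;
    }
    return 0;
}
```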
Diffstat (limited to 'examples/llava/llava-cli.cpp')
-rw-r--r-- | examples/llava/llava-cli.cpp | 5 |
1 file changed, 5 insertions, 0 deletions
```diff
diff --git a/examples/llava/llava-cli.cpp b/examples/llava/llava-cli.cpp
index 157a680b..da60ddf2 100644
--- a/examples/llava/llava-cli.cpp
+++ b/examples/llava/llava-cli.cpp
@@ -189,6 +189,11 @@ static void process_prompt(struct llava_context * ctx_llava, struct llava_image_
     LOG_TEE("\n");
 
     struct llama_sampling_context * ctx_sampling = llama_sampling_init(params->sparams);
+    if (!ctx_sampling) {
+        fprintf(stderr, "%s: failed to initialize sampling subsystem\n", __func__);
+        exit(1);
+    }
+
     std::string response = "";
     for (int i = 0; i < max_tgt_len; i++) {
         const char * tmp = sample(ctx_sampling, ctx_llava->ctx_llama, &n_past);
```
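
The hunk above is the caller-side hardening in llava-cli: llama_sampling_init() builds the sampling state, including any grammar passed via --grammar, and presumably signals failure by returning a null pointer. The added check turns such a failure into an immediate, descriptive exit instead of letting a null ctx_sampling reach the sample() call in the loop below it.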