author     Karthik Sethuraman <k.seth1993@gmail.com>  2023-12-29 06:22:10 -0800
committer  GitHub <noreply@github.com>                2023-12-29 16:22:10 +0200
commit     b93edd22f55d3e5268263c3edcdae1818505c078 (patch)
tree       d4519850dfd72170db4488ce1bb9e973130d91d5 /examples/server
parent     82d6eab224862a7044069fb9211dc4b29124264b (diff)
server : allow to generate multimodal embeddings (#4681)
Diffstat (limited to 'examples/server')
-rw-r--r--  examples/server/README.md   |  4 +++-
-rw-r--r--  examples/server/server.cpp  | 12 +++++++++++-
2 files changed, 14 insertions, 2 deletions
diff --git a/examples/server/README.md b/examples/server/README.md
index f1e586a1..718a7e06 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -166,7 +166,7 @@ node index.js
 `n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token (default: 0)

-`image_data`: An array of objects to hold base64-encoded image `data` and its `id`s to be reference in `prompt`. You can determine the place of the image in the prompt as in the following: `USER:[img-12]Describe the image in detail.\nASSISTANT:` In this case, `[img-12]` will be replaced by the embeddings of the image id 12 in the following `image_data` array: `{..., "image_data": [{"data": "<BASE64_STRING>", "id": 12}]}`. Use `image_data` only with multimodal models, e.g., LLaVA.
+`image_data`: An array of objects holding base64-encoded image `data` and the `id`s by which the images are referenced in `prompt`. You can place an image in the prompt as follows: `USER:[img-12]Describe the image in detail.\nASSISTANT:`. In this case, `[img-12]` will be replaced by the embeddings of the image with id `12` in the following `image_data` array: `{..., "image_data": [{"data": "<BASE64_STRING>", "id": 12}]}`. Use `image_data` only with multimodal models, e.g., LLaVA.

 *Result JSON:*
@@ -224,6 +224,8 @@ node index.js
 `content`: Set the text to process.

+`image_data`: An array of objects holding base64-encoded image `data` and the `id`s by which the images are referenced in `content`. You can place an image in the content as follows: `Image: [img-21].\nCaption: This is a picture of a house`. In this case, `[img-21]` will be replaced by the embeddings of the image with id `21` in the following `image_data` array: `{..., "image_data": [{"data": "<BASE64_STRING>", "id": 21}]}`. Use `image_data` only with multimodal models, e.g., LLaVA.
+
 - **POST** `/infill`: For code infilling. Takes a prefix and a suffix and returns the predicted completion as a stream.

 *Options:*
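Both the `/completion` prompt and the new `/embedding` content accept the same `image_data` shape. For illustration, a request against the extended endpoint could look as follows. This is a minimal sketch, assuming a server built with multimodal support and started with a LLaVA-style model and its `--mmproj` projector on the default `localhost:8080`, with the base64 payload shortened to a placeholder:

    curl --request POST \
        --url http://localhost:8080/embedding \
        --header "Content-Type: application/json" \
        --data '{
            "content": "Image: [img-21].\nCaption: This is a picture of a house",
            "image_data": [{"data": "<BASE64_STRING>", "id": 21}]
        }'

Before the embedding is computed, the server replaces the `[img-21]` placeholder in `content` with the embeddings of the image whose `id` is `21`.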
diff --git a/examples/server/server.cpp b/examples/server/server.cpp
index c5035e20..31b8cf33 100644
--- a/examples/server/server.cpp
+++ b/examples/server/server.cpp
@@ -3077,7 +3077,17 @@ int main(int argc, char **argv)
             {
                 prompt = "";
             }
-            const int task_id = llama.request_completion({ {"prompt", prompt}, { "n_predict", 0} }, false, true, -1);
+
+            json image_data;
+            if (body.count("image_data") != 0) {
+                image_data = body["image_data"];
+            }
+            else
+            {
+                image_data = "";
+            }
+
+            const int task_id = llama.request_completion({ {"prompt", prompt}, { "n_predict", 0}, {"image_data", image_data} }, false, true, -1);
             task_result result = llama.next_result(task_id);
             return res.set_content(result.result_json.dump(), "application/json; charset=utf-8");
         });
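The handler change above is the whole server-side plumbing: when the request body carries an `image_data` field it is forwarded untouched, and otherwise it degrades to an empty string, so existing text-only embedding requests keep working unchanged. As a sketch of the resulting call (field values shortened), the task handed to `request_completion` for a multimodal request carries:

    { "prompt": "Image: [img-21].\nCaption: This is a picture of a house",
      "n_predict": 0,
      "image_data": [{"data": "<BASE64_STRING>", "id": 21}] }

with `n_predict` pinned to 0 because the endpoint only needs the embeddings, not generated tokens.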