From 37c746d687d877bc11803e96b4dc5f378b83c0a0 Mon Sep 17 00:00:00 2001
From: Shijie <821898965@qq.com>
Date: Sat, 2 Dec 2023 02:16:31 +0800
Subject: llama : add Qwen support (#4281)

* enable qwen to llama.cpp

* llama : do not GPU split bias tensors

---------

Co-authored-by: Georgi Gerganov
---
 prompts/chat-with-qwen.txt | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 prompts/chat-with-qwen.txt

(limited to 'prompts')

diff --git a/prompts/chat-with-qwen.txt b/prompts/chat-with-qwen.txt
new file mode 100644
index 00000000..ac39ad92
--- /dev/null
+++ b/prompts/chat-with-qwen.txt
@@ -0,0 +1 @@
+You are a helpful assistant.
\ No newline at end of file