author    Kawrakow <48489457+ikawrakow@users.noreply.github.com>  2024-07-27 07:55:01 +0200
committer GitHub <noreply@github.com>  2024-07-27 07:55:01 +0200
commit    154e0d75fccf1784fe9ff6fd76a630b66563da3d (patch)
tree      81ce6dbb5b1900c1aa78a879f0593c694cab9d27 /scripts/server-llm.sh
parent    0684c3e9c70d49323b4fc517128cbe222cab7f96 (diff)
Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

  AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'scripts/server-llm.sh')
-rw-r--r--  scripts/server-llm.sh  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/server-llm.sh b/scripts/server-llm.sh
index 19923244..802592a3 100644
--- a/scripts/server-llm.sh
+++ b/scripts/server-llm.sh
@@ -380,7 +380,7 @@ fi
 if [[ "$backend" == "cuda" ]]; then
     printf "[+] Building with CUDA backend\n"
-    LLAMA_CUDA=1 make -j llama-server $log
+    GGML_CUDA=1 make -j llama-server $log
 elif [[ "$backend" == "cpu" ]]; then
     printf "[+] Building with CPU backend\n"
     make -j llama-server $log
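
The one-line change above tracks mainline llama.cpp's rename of its Makefile build switches from the LLAMA_ prefix to GGML_, which is why the old LLAMA_CUDA=1 no longer enables the CUDA backend. A minimal sketch of the resulting build invocations after this commit, assuming a checkout whose Makefile provides the llama-server target; the script's $log redirection is omitted here:

# CUDA backend: enabled via the renamed GGML_CUDA switch (requires the CUDA toolkit)
GGML_CUDA=1 make -j llama-server

# CPU backend: no extra switch needed
make -j llama-server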