| author | Neo Zhang Jianyu <jianyu.zhang@intel.com> | 2024-03-22 15:19:37 +0800 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-03-22 15:19:37 +0800 |
| commit | 59c17f02de8fdf7b084d6100b875b7e2bc07a83b (patch) | |
| tree | a10bf4594b35fd1f1b92190f6c68adf22d408822 | |
| parent | fa046eafbc70bf97dcf39843af0323f19a8c9ac3 (diff) | |
add blog link (#6222)
| -rw-r--r-- | README-sycl.md | 1 |
|---|---|---|

1 file changed, 1 insertion(+), 0 deletions(-)
```diff
diff --git a/README-sycl.md b/README-sycl.md
index 501b9d48..cbf14f2d 100644
--- a/README-sycl.md
+++ b/README-sycl.md
@@ -29,6 +29,7 @@ For Intel CPU, recommend to use llama.cpp for X86 (Intel MKL building).
 ## News
 - 2024.3
+  - A blog is published: **Run LLM on all Intel GPUs Using llama.cpp**: [intel.com](https://www.intel.com/content/www/us/en/developer/articles/technical/run-llm-on-all-gpus-using-llama-cpp-artical.html) or [medium.com](https://medium.com/@jianyu_neo/run-llm-on-all-intel-gpus-using-llama-cpp-fd2e2dcbd9bd).
 - New base line is ready: [tag b2437](https://github.com/ggerganov/llama.cpp/tree/b2437).
 - Support multiple cards: **--split-mode**: [none|layer]; not support [row], it's on developing.
 - Support to assign main GPU by **--main-gpu**, replace $GGML_SYCL_DEVICE.
```
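The context lines of the hunk above mention the `--split-mode` and `--main-gpu` options that the SYCL README documents. A minimal invocation sketch of those flags follows; the binary path, model file, and prompt are illustrative assumptions, not part of this commit:

```shell
# Sketch: run a SYCL build of llama.cpp on a single Intel GPU.
# Paths and the model filename below are assumptions for illustration.
./build/bin/main \
    -m models/llama-2-7b.Q4_0.gguf \
    -p "Hello" -n 32 \
    --split-mode none \
    --main-gpu 0
```

Per the README text in the hunk, `--split-mode` accepts `none` or `layer` (with `row` still in development), and `--main-gpu` replaces the older `GGML_SYCL_DEVICE` environment variable for selecting the device.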