author    | Jared Van Bortel <jared@nomic.ai> | 2024-01-29 15:50:50 -0500
committer | GitHub <noreply@github.com>       | 2024-01-29 15:50:50 -0500
commit    | fbf1ddec69f7001cc707de17fa74d7200813bbac (patch)
tree      | 55ff0324a0fe0dfc3de70d232a29b04926657ae1 /llama.h
parent    | 2aed77eb06a329f0d82bb1c467f4244904d4073f (diff)
Nomic Vulkan backend (#4456)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: ToKiNoBug <tokinobug@163.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
Diffstat (limited to 'llama.h')
-rw-r--r-- | llama.h | 3
1 file changed, 2 insertions(+), 1 deletion(-)
@@ -49,7 +49,8 @@
 #define LLAMA_SESSION_MAGIC LLAMA_FILE_MAGIC_GGSN
 #define LLAMA_SESSION_VERSION 4

-#if defined(GGML_USE_CUBLAS) || defined(GGML_USE_CLBLAST) || defined(GGML_USE_METAL) || defined(GGML_USE_VULKAN) || defined(GGML_USE_SYCL)
+#if defined(GGML_USE_CUBLAS) || defined(GGML_USE_CLBLAST) || defined(GGML_USE_METAL) || defined(GGML_USE_VULKAN) || \
+    defined(GGML_USE_SYCL) || defined(GGML_USE_KOMPUTE)
 // Defined when llama.cpp is compiled with support for offloading model layers to GPU.
 #define LLAMA_SUPPORTS_GPU_OFFLOAD
 #endif
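For context, below is a minimal sketch (not part of this commit) of how application code can consume the LLAMA_SUPPORTS_GPU_OFFLOAD macro, which this change now also defines for GGML_USE_KOMPUTE builds. The model path "model.gguf" and the value 99 for n_gpu_layers are placeholders; the llama_* calls shown reflect the public llama.h API at the time of this commit.

```c
// Sketch: gate GPU layer offload on LLAMA_SUPPORTS_GPU_OFFLOAD, which is
// defined when llama.cpp is built with any GPU backend (CUDA, CLBlast,
// Metal, Vulkan, SYCL, or, after this commit, Kompute).
#include <stdio.h>
#include "llama.h"

int main(void) {
    llama_backend_init(false);

    struct llama_model_params mparams = llama_model_default_params();

#ifdef LLAMA_SUPPORTS_GPU_OFFLOAD
    // GPU-enabled build: request offload of up to 99 layers (placeholder);
    // llama.cpp offloads at most the model's actual layer count.
    mparams.n_gpu_layers = 99;
#endif
    // In a CPU-only build, n_gpu_layers keeps its default and the model
    // runs entirely on the CPU.

    // "model.gguf" is a placeholder path.
    struct llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        llama_backend_free();
        return 1;
    }

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Because the macro is set at compile time, the same application source builds against both CPU-only and GPU-enabled copies of llama.cpp, which is why the #if condition is extended here rather than adding a new macro for the Kompute backend.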