| field | value | date |
|---|---|---|
| author | 0cc4m <picard12@live.de> | 2024-03-05 13:33:42 +0100 |
| committer | GitHub <noreply@github.com> | 2024-03-05 13:33:42 +0100 |
| commit | 61d1c88e155515dd03940913a5707ea84a8b119b | |
| tree | c2c7de9900b33a73a6fba4299523b54528676e1f /llama.cpp | |
| parent | 21b08674331e1ea1b599f17c5ca91f0ed173be31 | |
Vulkan Improvements (#5835)
* Improve dequant shaders, add fast q4_0 dequant (see the q4_0 layout sketch after this list)
* Optimize dmmv non-kquants for GCN
  * Remove unnecessary SPIR-V shader duplication
* Fix q4_0 dequant dispatch sizes
  * Fix backend free bug
* Optimize dequant shaders for q4_1, q5_0, q5_1 and q8_0
* Add unary and binary op shader templates
* Fix Vulkan check results
* Enable non-contiguous support for simple ops
* Add argsort
  * Basic q4_0 mmq shader and unit test
* Speed up q4_0 dequant code, enable mmq for q4_0
* Rework matmul pipeline selection
* Add soft_max alibi support (see the ALiBi softmax sketch after the diff below)
* Add q4_1, q5_0, q5_1 and q8_0 dequant mat mat mul shaders
* Add environment variable GGML_VK_FORCE_MAX_ALLOCATION_SIZE to limit max buffer size (usage sketch below)
  * Rename GGML_VULKAN_DISABLE_F16 to GGML_VK_DISABLE_F16 for consistency
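As context for the dequant items above, here is a minimal CPU-side sketch of ggml's q4_0 block layout and its reference dequantization; this is not the shader code. A plain `float` stands in for ggml's fp16 scale `d`, and `dequantize_q4_0` is an illustrative name:

```cpp
#include <cstdint>

// q4_0: blocks of 32 weights sharing one scale, packed as 16 bytes of nibbles.
#define QK4_0 32

struct block_q4_0 {
    float   d;              // scale (ggml stores this as fp16; float here for brevity)
    uint8_t qs[QK4_0 / 2];  // 32 x 4-bit quants, two per byte
};

// Reference dequantization: each nibble is an unsigned 4-bit value with an
// implicit offset of 8, so the reconstructed weight is d * (q - 8). Low
// nibbles fill the first half of the block, high nibbles the second half.
void dequantize_q4_0(const block_q4_0 * b, float * out) {
    for (int j = 0; j < QK4_0 / 2; ++j) {
        out[j]             = b->d * (float)((b->qs[j] & 0x0F) - 8); // low nibble
        out[j + QK4_0 / 2] = b->d * (float)((b->qs[j] >>   4) - 8); // high nibble
    }
}
```

The dequant and mmq shaders perform this same unpacking on the GPU; the speedups in this commit come from how that work is organized per invocation, which the sketch does not attempt to show.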
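And the usage sketch referenced above for GGML_VK_FORCE_MAX_ALLOCATION_SIZE: the pattern is to clamp the device's reported allocation limit with the environment value. The helper name and exact parsing here are assumptions; the real logic lives in ggml-vulkan.cpp:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>

// Illustrative only: clamp the device limit with the environment override,
// mirroring the intent of GGML_VK_FORCE_MAX_ALLOCATION_SIZE (the actual
// parsing in ggml-vulkan.cpp may differ).
static uint64_t effective_max_allocation(uint64_t device_max_alloc) {
    if (const char * s = std::getenv("GGML_VK_FORCE_MAX_ALLOCATION_SIZE")) {
        const uint64_t forced = std::strtoull(s, nullptr, 10);
        if (forced > 0) {
            return std::min(device_max_alloc, forced);
        }
    }
    return device_max_alloc;
}
```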
Diffstat (limited to 'llama.cpp')
| -rw-r--r-- | llama.cpp | 4 |

1 file changed, 2 insertions(+), 2 deletions(-)
```diff
@@ -5014,8 +5014,8 @@ static struct ggml_tensor * llm_build_kqv(
         ggml_mul_mat_set_prec(kq, GGML_PREC_F32);
     }
 
-#if defined(GGML_USE_VULKAN) || defined(GGML_USE_KOMPUTE)
-#pragma message("TODO: ALiBi support in ggml_soft_max_ext is not implemented for Vulkan, and Kompute")
+#if defined(GGML_USE_KOMPUTE)
+#pragma message("TODO: ALiBi support in ggml_soft_max_ext is not implemented for Kompute")
 #pragma message("      Falling back to ggml_alibi(). Will become an error in Mar 2024")
 #pragma message("ref:  https://github.com/ggerganov/llama.cpp/pull/5488")
     if (hparams.f_max_alibi_bias > 0.0f) {
```
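The hunk drops Vulkan from the fallback because the backend now handles ALiBi inside ggml_soft_max_ext, leaving only Kompute on the ggml_alibi() path. As a reminder of what that op fuses, here is a hedged scalar sketch of an ALiBi-biased softmax over one attention row; the function name, the distance-based bias form, and the slope parameter are illustrative, and the per-head slope derivation from f_max_alibi_bias is omitted:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hedged sketch: each key position j receives a linear penalty proportional
// to its distance from the query position, weighted by a per-head slope,
// before a numerically stable softmax. This approximates what the fused
// ALiBi soft_max computes; exact sign and slope conventions follow ggml.
std::vector<float> softmax_alibi(const std::vector<float> & scores,
                                 float scale, float slope, int q_pos) {
    std::vector<float> out(scores.size());
    float max_v = -INFINITY;
    for (size_t j = 0; j < scores.size(); ++j) {
        out[j] = scores[j] * scale - slope * std::fabs(float(q_pos) - float(j));
        max_v  = std::max(max_v, out[j]);
    }
    float sum = 0.0f;
    for (float & v : out) { v = std::exp(v - max_v); sum += v; } // stable exp
    for (float & v : out) { v /= sum; }
    return out;
}
```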