From 154e0d75fccf1784fe9ff6fd76a630b66563da3d Mon Sep 17 00:00:00 2001
From: Kawrakow <48489457+ikawrakow@users.noreply.github.com>
Date: Sat, 27 Jul 2024 07:55:01 +0200
Subject: Merge mainline llama.cpp (#3)

* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%)
lower, as is so often the case with llama.cpp/ggml after some
"improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow
---
 spm-headers/ggml-alloc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/spm-headers/ggml-alloc.h b/spm-headers/ggml-alloc.h
index a49d385a..0361ffc3 120000
--- a/spm-headers/ggml-alloc.h
+++ b/spm-headers/ggml-alloc.h
@@ -1 +1 @@
-../ggml-alloc.h
\ No newline at end of file
+../ggml/include/ggml-alloc.h
\ No newline at end of file
--
cgit v1.2.3