* Merging mainline - WIP
* Merging mainline - WIP
AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often the case with
llama.cpp/ggml after some "improvements" have been made.
* Merging mainline - fix Metal
* Remove check
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

* Revert "swift : update Package.swift to use ggml as dependency (#4691)"
This reverts commit ece9a45e8ffb73ad461c792720c2fec28b0137bc.
* spm : add ggml headers

ggml-ci

* Ignore metal file in spm
* Add ggml.h to spm public Headers
---------
Co-authored-by: Vogel Frederik <vogel.frederik@linecorp.com>
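
For context on the two entries above, here is a minimal sketch of an SPM target that ignores the Metal shader and publishes ggml.h as a public header. The file names, the `spm-headers` directory, and the target layout are assumptions for illustration, not the repository's actual manifest.

```swift
// swift-tools-version:5.5
// Sketch only: file names, the spm-headers directory, and the target
// layout are illustrative assumptions, not the repository's manifest.
import PackageDescription

let package = Package(
    name: "llama",
    products: [.library(name: "llama", targets: ["llama"])],
    targets: [
        .target(
            name: "llama",
            path: ".",
            // The Metal shader is loaded at runtime by the backend, so it
            // is kept out of SwiftPM's build instead of being compiled here.
            exclude: ["ggml-metal.metal"],
            sources: ["ggml.c", "llama.cpp"],
            // Headers placed here (e.g. spm-headers/ggml.h) become visible
            // to packages that depend on this library.
            publicHeadersPath: "spm-headers"
        )
    ]
)
```

With public headers exposed this way, packages that depend on the library can reach the ggml declarations through the module SwiftPM generates for the target.
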
* Add a Package.swift for SwiftPM support
* Swap from exclusions to allowlist
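
As a rough illustration of the "exclusions to allowlist" change above: an SPM target can either exclude the files that should not be built or list exactly the sources that should. A minimal sketch of the allowlist form, with assumed file names:

```swift
// swift-tools-version:5.5
// Sketch of the allowlist approach; file names are assumptions.
import PackageDescription

let package = Package(
    name: "llama",
    products: [.library(name: "llama", targets: ["llama"])],
    targets: [
        .target(
            name: "llama",
            path: ".",
            // Allowlist: compile exactly these files. New files in the
            // repository are not picked up until they are added here.
            sources: ["ggml.c", "llama.cpp"]
            // The alternative is `exclude: [...]`, which must enumerate
            // every file that should *not* be built.
        )
    ]
)
```

The allowlist keeps the build predictable when unrelated files (scripts, examples, build configs) live in the same directory as the sources.
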