ik_llama.cpp.git — branch: main
Commit log for path: root / ggml-metal.h
Age         Commit message                                                         Author
2024-01-16  metal : localized logic in `ggml_metal_graph_compute` (#4924)          Paul Tsochantaris
2024-01-16  ggml : introduce GGML_CALL function annotation (#4850)                 Justine Tunney
2024-01-13  metal : remove old API (#4919)                                         Georgi Gerganov
2024-01-05  ggml : add error handling to graph_compute (whisper/1714)              Finn Voorhees
2023-12-21  llama : initial ggml-backend integration (#4520)                       slaren
2023-12-07  sync : ggml (new ops, tests, backend, etc.) (#4359)                    Georgi Gerganov
2023-11-13  ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060)              Georgi Gerganov
2023-10-08  sync : ggml (ggml-backend) (#3548)                                     Georgi Gerganov
2023-09-27  metal : reusing llama.cpp logging (#3152)                              Rickard Hallerbäck
2023-08-28  metal : fix memory leak (#2762)                                        Georgi Gerganov
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398)          Georgi Gerganov
2023-08-16  metal : enable ggml-alloc (#2627)                                      Shouzheng Liu
2023-07-25  metal : concurrently dispatch commands (#2358)                         Shouzheng Liu
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999)  Qingyou Meng
2023-06-18  metal : handle buffers larger than device's maxBufferLength (#1826)    Georgi Gerganov
2023-06-15  metal : parallel command buffer encoding (#1860)                       Georgi Gerganov
2023-06-04  llama : Metal inference (#1642)                                        Georgi Gerganov