ik_llama.cpp.git (branch: main)
Commit log for path: /ggml-cuda.h
Age         Author           Commit message
2024-01-16  Justine Tunney   ggml : introduce GGML_CALL function annotation (#4850)
2024-01-12  slaren           llama : ggml-backend integration (#4766)
2023-12-07  Georgi Gerganov  sync : ggml (new ops, tests, backend, etc.) (#4359)
2023-11-07  Meng Zhang       cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946)
2023-10-08  Georgi Gerganov  sync : ggml (ggml-backend) (#3548)
2023-09-28  Georgi Gerganov  llama : custom attention mask + parallel decoding + no context swaps (#3228)
2023-08-25  Henri Vasserman  ROCm Port (#1087)
2023-08-22  slaren           ggml-cuda : use graph allocator (#2684)
2023-08-18  slaren           llama : add benchmark example (#2626)
2023-07-31  Johannes Gäßler  CUDA: mmq CLI option, fixed mmq build issues (#2453)
2023-07-01  Johannes Gäßler  Better CUDA synchronization logic (#2057)
2023-06-28  Johannes Gäßler  CUDA GPU acceleration for LoRAs + f16 models (#1970)
2023-06-14  Johannes Gäßler  CUDA full GPU acceleration, KV cache in VRAM (#1827)
2023-06-12  Howard Su        Leverage mmap for offloading tensors to GPU (#1597)
2023-06-06  Johannes Gäßler  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)
2023-05-20  Johannes Gäßler  cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti...
2023-05-13  Johannes Gäßler  ggml : GPU-accelerated token generation (#1412)
2023-05-01  slaren           cuBLAS: refactor and optimize f16 mat mul performance (#1259)
2023-04-29  slaren           cuBLAS: use host pinned memory and dequantize while copying (#1207)
2023-04-29  Henri Vasserman  cuBLAS: non-contiguous tensor support (#1215)
2023-04-28  Stephan Walter   Remove Q4_3 which is no better than Q5 (#1218)
2023-04-26  Georgi Gerganov  ggml : add Q5_0 and Q5_1 quantization (#1187)
2023-04-25  Georgi Gerganov  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (...
2023-04-21  slaren           Improve cuBLAS performance by using a memory pool (#1094)
2023-04-20  slaren           Add Q4_3 support to cuBLAS (#1086)
2023-04-20  slaren           Improve cuBLAS performance by dequantizing on the GPU (#1065)