path: root/ggml-cuda.cu
Age        | Commit message                                                                   | Author
2024-05-08 | Introduction of CUDA Graphs to LLama.cpp (#6766)                                 | agray3
2024-05-06 | Add an option to build without CUDA VMM (#7067)                                  | William Tambellini
2024-04-30 | ggml : add Flash Attention (#5021)                                               | Georgi Gerganov
2024-04-18 | ggml : group all experts in a single ggml_mul_mat_id (#6505)                     | slaren
2024-04-14 | CUDA: fix matrix multiplication logic for tests (#6667)                          | Johannes Gäßler
2024-04-09 | llama : add Command R Plus support (#6491)                                       | Carolinabanana
2024-04-07 | ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020)                   | Slava Primenko
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387)                | slaren
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122)                        | compilade
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302)                                             | Kawrakow
2024-03-25 | cuda : refactor into multiple files (#6269)                                      | slaren
2024-03-22 | cuda : add LLAMA_CUDA_NO_PEER_COPY to workaround broken ROCm p2p copy (#6208)    | slaren
2024-03-21 | cuda : disable host register by default (#6206)                                  | slaren
2024-03-21 | cuda : fix LLAMA_CUDA_F16 build (#6197)                                          | slaren
2024-03-21 | Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183)          | Kawrakow
2024-03-21 | cuda : fix conflict with std::swap (#6186)                                       | slaren
2024-03-20 | cuda : print the returned error when CUDA initialization fails (#6185)           | slaren
2024-03-20 | cuda : refactor to remove global resources (#6170)                               | slaren
2024-03-18 | backend : offload large batches to GPU (#6083)                                   | slaren
2024-03-15 | cuda : disable unused cudaLaunchHostFunc code (#6078)                            | slaren
2024-03-13 | llama : add pipeline parallelism support (#6017)                                 | slaren
2024-03-12 | ggml : reuse quantum structs across backends (#5943)                             | Georgi Gerganov
2024-03-11 | 1.5 bit: we can do even better (#5999)                                           | Kawrakow
2024-03-11 | Better 1.5 bit quantization (#5971)                                              | Kawrakow
2024-03-09 | ggml : add ggml-common.h to deduplicate shared code (#5940)                      | Georgi Gerganov
2024-03-04 | ggml : introduce ggml_status (ggml/750)                                          | Michael Podvitskiy
2024-03-04 | add some new ops, fix some operators and add batch operations to certain oper... | leejet
2024-03-03 | cuda : fix data race in soft max (#5853)                                         | slaren
2024-03-02 | ggml : IQ3_S improvements (#5829)                                                | Kawrakow
2024-02-28 | Introduce backend GUIDs (ggml/743)                                               | UEXTM.com
2024-02-28 | ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (#5760)            | Kawrakow
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747)                                          | Kawrakow
2024-02-27 | cuda : replace remaining shfl_xor with calls to warp_reduce functions (#5744)    | Engininja2
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-26 | CUDA: fix DEBUG_CUDA_MALLOC (#5729)                                              | Johannes Gäßler
2024-02-25 | code : normalize enum names (#5697)                                              | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676)                                 | Kawrakow
2024-02-22 | ggml : always define ggml_fp16_t as uint16_t (#5666)                             | Georgi Gerganov
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)                        | Kawrakow
2024-02-19 | cuda : ignore peer access already enabled errors (#5597)                         | slaren
2024-02-19 | ci : enable -Werror for CUDA builds (#5579)                                      | Georgi Gerganov
2024-02-19 | cuda, metal : fix nans in soft_max (#5574)                                       | slaren
2024-02-18 | 1.5 bit quantization (#5453)                                                     | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488)                           | Georgi Gerganov
2024-02-15 | cuda : print message when initialization fails (#5512)                           | slaren
2024-02-11 | CUDA: mul_mat_vec_q tiling, refactor mul mat logic (#5434)                       | Johannes Gäßler
2024-02-08 | CUDA: more warps for mmvq on NVIDIA (#5394)                                      | Johannes Gäßler
2024-02-07 | CUDA: fixed mmvq kernel for bs 2,3,4 and -sm row (#5386)                         | Johannes Gäßler
2024-02-06 | CUDA: mul_mat_vec_q max. batch size 8 -> 4 (#5370)                               | Johannes Gäßler
2024-02-06 | CUDA: mul_mat_vec_q for batch sizes > 1 (#5351)                                  | Johannes Gäßler