ik_llama.cpp.git: commit log for ggml-cuda.cu (branch: main)
Age | Commit message | Author
2024-05-08 | Introduction of CUDA Graphs to LLama.cpp (#6766) | agray3
2024-05-06 | Add an option to build without CUDA VMM (#7067) | William Tambellini
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-18 | ggml : group all experts in a single ggml_mul_mat_id (#6505) | slaren
2024-04-14 | CUDA: fix matrix multiplication logic for tests (#6667) | Johannes Gäßler
2024-04-09 | llama : add Command R Plus support (#6491) | Carolinabanana
2024-04-07 | ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020) | Slava Primenko
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387) | slaren
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122) | compilade
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302) | Kawrakow
2024-03-25 | cuda : refactor into multiple files (#6269) | slaren
2024-03-22 | cuda : add LLAMA_CUDA_NO_PEER_COPY to workaround broken ROCm p2p copy (#6208) | slaren
2024-03-21 | cuda : disable host register by default (#6206) | slaren
2024-03-21 | cuda : fix LLAMA_CUDA_F16 build (#6197) | slaren
2024-03-21 | Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183) | Kawrakow
2024-03-21 | cuda : fix conflict with std::swap (#6186) | slaren
2024-03-20 | cuda : print the returned error when CUDA initialization fails (#6185) | slaren
2024-03-20 | cuda : refactor to remove global resources (#6170) | slaren
2024-03-18 | backend : offload large batches to GPU (#6083) | slaren
2024-03-15 | cuda : disable unused cudaLaunchHostFunc code (#6078) | slaren
2024-03-13 | llama : add pipeline parallelism support (#6017) | slaren
2024-03-12 | ggml : reuse quantum structs across backends (#5943) | Georgi Gerganov
2024-03-11 | 1.5 bit: we can do even better (#5999) | Kawrakow
2024-03-11 | Better 1.5 bit quantization (#5971) | Kawrakow
2024-03-09 | ggml : add ggml-common.h to deduplicate shared code (#5940) | Georgi Gerganov
2024-03-04 | ggml : introduce ggml_status (ggml/750) | Michael Podvitskiy
2024-03-04 | add some new ops, fix some operators and add batch operations to certain oper... | leejet
2024-03-03 | cuda : fix data race in soft max (#5853) | slaren
2024-03-02 | ggml : IQ3_S improvements (#5829) | Kawrakow
2024-02-28 | Introduce backend GUIDs (ggml/743) | UEXTM.com
2024-02-28 | ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (#5760) | Kawrakow
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747) | Kawrakow
2024-02-27 | cuda : replace remaining shfl_xor with calls to warp_reduce functions (#5744) | Engininja2
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-26 | CUDA: fix DEBUG_CUDA_MALLOC (#5729) | Johannes Gäßler
2024-02-25 | code : normalize enum names (#5697) | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676) | Kawrakow
2024-02-22 | ggml : always define ggml_fp16_t as uint16_t (#5666) | Georgi Gerganov
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | Kawrakow
2024-02-19 | cuda : ignore peer access already enabled errors (#5597) | slaren
2024-02-19 | ci : enable -Werror for CUDA builds (#5579) | Georgi Gerganov
2024-02-19 | cuda, metal : fix nans in soft_max (#5574) | slaren
2024-02-18 | 1.5 bit quantization (#5453) | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | Georgi Gerganov
2024-02-15 | cuda : print message when initialization fails (#5512) | slaren
2024-02-11 | CUDA: mul_mat_vec_q tiling, refactor mul mat logic (#5434) | Johannes Gäßler
2024-02-08 | CUDA: more warps for mmvq on NVIDIA (#5394) | Johannes Gäßler
2024-02-07 | CUDA: fixed mmvq kernel for bs 2,3,4 and -sm row (#5386) | Johannes Gäßler
2024-02-06 | CUDA: mul_mat_vec_q max. batch size 8 -> 4 (#5370) | Johannes Gäßler
2024-02-06 | CUDA: mul_mat_vec_q for batch sizes > 1 (#5351) | Johannes Gäßler