path: root/ggml-cuda.cu
Age | Commit message | Author
2024-02-01 | cuda : fix LLAMA_CUDA_F16 (#5262) | slaren
2024-01-31 | llava : add MobileVLM support (#5132) | JidongZhang-THU
2024-01-30 | sync : ggml (#0) | Georgi Gerganov
2024-01-30 | `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686) | John Balis
2024-01-30 | SOTA 3-bit quants (#5196) | Kawrakow
2024-01-28 | ggml : add Vulkan backend (#2059) | 0cc4m
2024-01-26 | cuda : fix tensor size calculation for non-split buffer (#5145) | slaren
2024-01-24 | cuda : fix 2-bit quants on amd hip (#5105) | Engininja2
2024-01-23 | CUDA: more info when no device code (#5088) | Johannes Gäßler
2024-01-20 | cuda : fix compile error in jetson platform (#4975) | Kylin
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | Georgi Gerganov
2024-01-16 | ggml : introduce GGML_CALL function annotation (#4850) | Justine Tunney
2024-01-15 | cuda : fix dequantize kernel names (#4938) | Georgi Gerganov
2024-01-15 | CUDA: faster dequantize kernels for Q4_0 and Q4_1 (#4938) | Kawrakow
2024-01-12 | CUDA: faster q8_0 -> f16 dequantization (#4895) | Johannes Gäßler
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-12 | CUDA: fix softmax compile for old CUDA versions (#4862) | Johannes Gäßler
2024-01-11 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | Kawrakow
2024-01-11 | fix : cuda order of synchronization when setting a buffer (ggml/679) | Erik Scholz
2024-01-09 | CUDA: faster softmax via shared memory + fp16 math (#4742) | Johannes Gäßler
2024-01-08 | SOTA 2-bit quants (#4773) | Kawrakow
2024-01-07 | CUDA: fixed redundant value dequantization (#4809) | Johannes Gäßler
2024-01-07 | ggml : use __builtin_amdgcn_sudot4 in __dp4a for gfx11 (#4787) | Konstantin Zhuravlyov
2024-01-05 | ggml : add error handling to graph_compute (whisper/1714) | Finn Voorhees
2024-01-03 | cuda : simplify expression | Georgi Gerganov
2024-01-03 | cuda : mark I16 and I32 ops as unsupported | Georgi Gerganov
2023-12-30 | CUDA: fixed tensor cores not being used on RDNA3 (#4697) | Johannes Gäßler
2023-12-29 | CUDA: fix tensor core logic for Pascal and HIP (#4682) | Johannes Gäßler
2023-12-29 | cuda: fix vmm oom issue on NVIDIA AGX Orin (#4687) | hydai
2023-12-29 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | bssrdf
2023-12-26 | cuda : fix vmm pool with multi GPU (#4620) | slaren
2023-12-26 | Fix new CUDA10 compilation errors (#4635) | FantasyGmm
2023-12-24 | cuda : improve cuda pool efficiency using virtual memory (#4606) | slaren
2023-12-23 | fallback to CPU buffer if host buffer alloc fails (#4610) | slaren
2023-12-23 | CUDA: fixed row rounding for 0 tensor splits (#4594) | Johannes Gäßler
2023-12-22 | sync : ggml (fix im2col) (#4591) | Georgi Gerganov
2023-12-22 | cuda : fix jetson compile error (#4560) | FantasyGmm
2023-12-22 | Fix CudaMemcpy direction (#4599) | Henrik Forstén
2023-12-22 | llama : fix platforms without mmap (#4578) | slaren
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov
2023-12-21 | llama : initial ggml-backend integration (#4520) | slaren
2023-12-21 | cuda : ROCm AMD Unified Memory Architecture (UMA) handling (#4449) | Erik Garrison
2023-12-21 | ggml-cuda: Fix HIP build by adding define for __trap (#4569) | arlo-phoenix
2023-12-21 | CUDA: mul_mat_id always on GPU for batches >= 32 (#4553) | Johannes Gäßler
2023-12-21 | cuda : better error message for ggml_get_rows (#4561) | bobqianic
2023-12-21 | cuda : replace asserts in wrong architecture checks with __trap (#4556) | slaren
2023-12-21 | Fix access violation in ggml_cuda_free_data if tensor->extra is NULL (#4554) | LoganDark
2023-12-20 | CUDA: Faster Mixtral prompt processing (#4538) | Johannes Gäßler
2023-12-18 | ggml-cuda: Fix HIP build (#4528) | arlo-phoenix
2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | Ebey Abraham