Repository: ik_llama.cpp.git (branch: main)
Path: root/ggml-cuda.cu
Age         Commit message  [Author]
2024-02-01  cuda : fix LLAMA_CUDA_F16 (#5262)  [slaren]
2024-01-31  llava : add MobileVLM support (#5132)  [JidongZhang-THU]
2024-01-30  sync : ggml (#0)  [Georgi Gerganov]
2024-01-30  `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686)  [John Balis]
2024-01-30  SOTA 3-bit quants (#5196)  [Kawrakow]
2024-01-28  ggml : add Vulkan backend (#2059)  [0cc4m]
2024-01-26  cuda : fix tensor size calculation for non-split buffer (#5145)  [slaren]
2024-01-24  cuda : fix 2-bit quants on amd hip (#5105)  [Engininja2]
2024-01-23  CUDA: more info when no device code (#5088)  [Johannes Gäßler]
2024-01-20  cuda : fix compile error in jetson platform (#4975)  [Kylin]
2024-01-17  ggml : add IQ2 to test-backend-ops + refactoring (#4990)  [Georgi Gerganov]
2024-01-16  ggml : introduce GGML_CALL function annotation (#4850)  [Justine Tunney]
2024-01-15  cuda : fix dequantize kernel names (#4938)  [Georgi Gerganov]
2024-01-15  CUDA: faster dequantize kernels for Q4_0 and Q4_1 (#4938)  [Kawrakow]
2024-01-12  CUDA: faster q8_0 -> f16 dequantization (#4895)  [Johannes Gäßler]
2024-01-12  llama : ggml-backend integration (#4766)  [slaren]
2024-01-12  CUDA: fix softmax compile for old CUDA versions (#4862)  [Johannes Gäßler]
2024-01-11  ggml : SOTA 2-bit quants (add IQ2_XS) (#4856)  [Kawrakow]
2024-01-11  fix : cuda order of synchronization when setting a buffer (ggml/679)  [Erik Scholz]
2024-01-09  CUDA: faster softmax via shared memory + fp16 math (#4742)  [Johannes Gäßler]
2024-01-08  SOTA 2-bit quants (#4773)  [Kawrakow]
2024-01-07  CUDA: fixed redundant value dequantization (#4809)  [Johannes Gäßler]
2024-01-07  ggml : use __builtin_amdgcn_sudot4 in __dp4a for gfx11 (#4787)  [Konstantin Zhuravlyov]
2024-01-05  ggml : add error handling to graph_compute (whisper/1714)  [Finn Voorhees]
2024-01-03  cuda : simplify expression  [Georgi Gerganov]
2024-01-03  cuda : mark I16 and I32 ops as unsupported  [Georgi Gerganov]
2023-12-30  CUDA: fixed tensor cores not being used on RDNA3 (#4697)  [Johannes Gäßler]
2023-12-29  CUDA: fix tensor core logic for Pascal and HIP (#4682)  [Johannes Gäßler]
2023-12-29  cuda: fix vmm oom issue on NVIDIA AGX Orin (#4687)  [hydai]
2023-12-29  ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669)  [bssrdf]
2023-12-26  cuda : fix vmm pool with multi GPU (#4620)  [slaren]
2023-12-26  Fix new CUDA10 compilation errors (#4635)  [FantasyGmm]
2023-12-24  cuda : improve cuda pool efficiency using virtual memory (#4606)  [slaren]
2023-12-23  fallback to CPU buffer if host buffer alloc fails (#4610)  [slaren]
2023-12-23  CUDA: fixed row rounding for 0 tensor splits (#4594)  [Johannes Gäßler]
2023-12-22  sync : ggml (fix im2col) (#4591)  [Georgi Gerganov]
2023-12-22  cuda : fix jetson compile error (#4560)  [FantasyGmm]
2023-12-22  Fix CudaMemcpy direction (#4599)  [Henrik Forstén]
2023-12-22  llama : fix platforms without mmap (#4578)  [slaren]
2023-12-21  ggml : change ggml_scale to take a float instead of tensor (#4573)  [Georgi Gerganov]
2023-12-21  llama : initial ggml-backend integration (#4520)  [slaren]
2023-12-21  cuda : ROCm AMD Unified Memory Architecture (UMA) handling (#4449)  [Erik Garrison]
2023-12-21  ggml-cuda: Fix HIP build by adding define for __trap (#4569)  [arlo-phoenix]
2023-12-21  CUDA: mul_mat_id always on GPU for batches >= 32 (#4553)  [Johannes Gäßler]
2023-12-21  cuda : better error message for ggml_get_rows (#4561)  [bobqianic]
2023-12-21  cuda : replace asserts in wrong architecture checks with __trap (#4556)  [slaren]
2023-12-21  Fix access violation in ggml_cuda_free_data if tensor->extra is NULL (#4554)  [LoganDark]
2023-12-20  CUDA: Faster Mixtral prompt processing (#4538)  [Johannes Gäßler]
2023-12-18  ggml-cuda: Fix HIP build (#4528)  [arlo-phoenix]
2023-12-18  llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490)  [Ebey Abraham]