path: root/ggml.c
Age | Commit message | Author
2024-02-12 | sync : ggml (#5452) | Georgi Gerganov
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966) | snadampal
2024-02-10 | ggml : add abort_callback for cpu backend (ggml/725) | Michael Podvitskiy
2024-02-07 | Basic Vulkan Multi-GPU implementation (#5321) | 0cc4m
2024-02-05 | ggml : avoid duplicating function calls using MIN/MAX macros (#5325) | Dr. Tom Murphy VII Ph.D
2024-01-31 | llava : add MobileVLM support (#5132) | JidongZhang-THU
2024-01-31 | ggml : limit n_threads to the max n_tasks (#5238) | slaren
2024-01-30 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) | Jared Van Bortel
2024-01-30 | gguf : fix comparison (ggml/715) | Georgi Gerganov
2024-01-30 | gguf : add input validation, prevent integer overflows (ggml/709) | Georgi Gerganov
2024-01-30 | SOTA 3-bit quants (#5196) | Kawrakow
2024-01-28 | ggml : minor type fix (int64_t -> size_t) | Georgi Gerganov
2024-01-28 | ggml : add Vulkan backend (#2059) | 0cc4m
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-27 | ggml : check ggml_add src1 type (ggml/708) | Judd
2024-01-26 | Add OpenCL add kernel (#5151) | 0cc4m
2024-01-26 | ggml : update softmax n_task calculation (#5126) | snadampal
2024-01-23 | minor : clean-up some warnings and style (#5094) | Georgi Gerganov
2024-01-22 | ggml : parallelize FP32 conversion when using BLAS (#5045) | Reinforce-II
2024-01-22 | llava : MobileVLM support (#4954) | XiaotaoChen
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | Georgi Gerganov
2024-01-17 | imatrix : offload to GPU support (#4957) | Georgi Gerganov
2024-01-16 | ggml : importance matrix support for legacy quants (#4969) | Kawrakow
2024-01-16 | ggml : introduce GGML_CALL function annotation (#4850) | Justine Tunney
2024-01-14 | Add ability to use importance matrix for all k-quants (#4930) | Kawrakow
2024-01-14 | 2-bit quantizations (#4897) | Kawrakow
2024-01-13 | ggml : cache sin/cos for RoPE (#4908) | Johannes Gäßler
2024-01-13 | gguf : fix potential infinite for-loop (#4600) | texmex76
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-12 | Importance Matrix calculation (#4861) | Kawrakow
2024-01-11 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | Kawrakow
2024-01-11 | ggml : remove ggml_cpy_inplace and ggml_cont_inplace (ggml/693) | Timothy Cronin
2024-01-11 | Fix execlp call (ggml/689) | Halalaluyafail3
2024-01-08 | SOTA 2-bit quants (#4773) | Kawrakow
2024-01-05 | ggml : do not sched_yield when calling BLAS (#4761) | Georgi Gerganov
2024-01-03 | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639) | Guillaume Wenzek
2023-12-30 | ggml : add ggml_cpu_has_avx_vnni() (#4589) | automaticcat
2023-12-29 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | bssrdf
2023-12-26 | cuda : fix vmm pool with multi GPU (#4620) | slaren
2023-12-26 | Update comment for AdamW implementation reference. (#4604) | WillCorticesAI
2023-12-24 | cuda : improve cuda pool efficiency using virtual memory (#4606) | slaren
2023-12-22 | llama : fix platforms without mmap (#4578) | slaren
2023-12-22 | ggml : add comment about backward GGML_OP_DIAG_MASK_INF (#4203) | Herman Semenov
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov
2023-12-21 | llama : initial ggml-backend integration (#4520) | slaren
2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | Ebey Abraham
2023-12-15 | ggml : group mul_mat_id rows by matrix (cpu only) (#4480) | slaren
2023-12-14 | ggml : use ggml_row_size where possible (#4472) | slaren
2023-12-14 | ggml : remove n_dims from ggml_tensor (#4469) | slaren
2023-12-14 | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | LostRuins