path: root/ggml.c
Age        | Commit message                                                                | Author
2024-03-13 | llama : add pipeline parallelism support (#6017)                              | slaren
2024-03-11 | ggml, ci : Windows ARM runner and build fixes (#5979)                         | Michael Podvitskiy
2024-03-09 | ggml : remove old quantization functions (#5942)                              | Georgi Gerganov
2024-03-08 | llama : support Mamba Selective State Space Models (#5328)                    | compilade
2024-03-06 | ggml : use SYS_get_cpu if SYS_getcpu is not defined (#5906)                   | Jared Van Bortel
2024-03-04 | ggml : fix unknown status (#0)                                                | Georgi Gerganov
2024-03-04 | ggml : introduce ggml_status (ggml/750)                                       | Michael Podvitskiy
2024-03-04 | add some new ops, fix some operators and add batch operations to certain oper... | leejet
2024-02-28 | add google magika inference example (ggml/748)                                | slaren
2024-02-28 | Introduce backend GUIDs (ggml/743)                                            | UEXTM.com
2024-02-28 | ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (#5760)         | Kawrakow
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747)                                       | Kawrakow
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-25 | code : normalize enum names (#5697)                                           | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676)                              | Kawrakow
2024-02-22 | ggml : always define ggml_fp16_t as uint16_t (#5666)                          | Georgi Gerganov
2024-02-21 | sync : ggml (#5633)                                                           | Georgi Gerganov
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)                     | Kawrakow
2024-02-19 | Allow for Vulkan build with Accelerate.                                       | Mathijs de Bruin
2024-02-19 | ggml : android and old glibc NUMA incompatibility bugfixes (#5557)            | bmwl
2024-02-18 | ggml, common, examples, tests : fixed type arguments in printf (#5528)        | Herman Semenov
2024-02-18 | 1.5 bit quantization (#5453)                                                  | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488)                        | Georgi Gerganov
2024-02-17 | ci : add an option to fail on compile warning (#3952)                         | Ananta Bastola
2024-02-16 | ggml : add numa options (#5377)                                               | bmwl
2024-02-12 | sync : ggml (#5452)                                                           | Georgi Gerganov
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966)                            | snadampal
2024-02-10 | ggml : add abort_callback for cpu backend (ggml/725)                          | Michael Podvitskiy
2024-02-07 | Basic Vulkan Multi-GPU implementation (#5321)                                 | 0cc4m
2024-02-05 | ggml : avoid duplicating function calls using MIN/MAX macros (#5325)          | Dr. Tom Murphy VII Ph.D
2024-01-31 | llava : add MobileVLM support (#5132)                                         | JidongZhang-THU
2024-01-31 | ggml : limit n_threads to the max n_tasks (#5238)                             | slaren
2024-01-30 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)              | Jared Van Bortel
2024-01-30 | gguf : fix comparison (ggml/715)                                              | Georgi Gerganov
2024-01-30 | gguf : add input validation, prevent integer overflows (ggml/709)             | Georgi Gerganov
2024-01-30 | SOTA 3-bit quants (#5196)                                                     | Kawrakow
2024-01-28 | ggml : minor type fix (int64_t -> size_t)                                     | Georgi Gerganov
2024-01-28 | ggml : add Vulkan backend (#2059)                                             | 0cc4m
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690)                        | Abhilash Majumder
2024-01-27 | ggml : check ggml_add src1 type (ggml/708)                                    | Judd
2024-01-26 | Add OpenCL add kernel (#5151)                                                 | 0cc4m
2024-01-26 | ggml : update softmax n_task calculation (#5126)                              | snadampal
2024-01-23 | minor : clean-up some warnings and style (#5094)                              | Georgi Gerganov
2024-01-22 | ggml : parallelize FP32 conversion when using BLAS (#5045)                    | Reinforce-II
2024-01-22 | llava : MobileVLM support (#4954)                                             | XiaotaoChen
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990)                      | Georgi Gerganov
2024-01-17 | imatrix : offload to GPU support (#4957)                                      | Georgi Gerganov
2024-01-16 | ggml : importance matrix support for legacy quants (#4969)                    | Kawrakow
2024-01-16 | ggml : introduce GGML_CALL function annotation (#4850)                        | Justine Tunney
2024-01-14 | Add ability to use importance matrix for all k-quants (#4930)                 | Kawrakow