path: root/ggml.h
Age        | Commit message | Author
2024-04-18 | ggml : group all experts in a single ggml_mul_mat_id (#6505) | slaren
2024-04-12 | llama : add gguf_remove_key + remove split meta during quantize (#6591) | jiez
2024-04-09 | llama : add Command R Plus support (#6491) | Carolinabanana
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387) | slaren
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122) | compilade
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302) | Kawrakow
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299) | slaren
2024-03-23 | use _wfopen instead of fopen on Windows (#6248) | Jared Van Bortel
2024-03-15 | gguf : add support for I64 and F64 arrays (#6062) | Ondřej Čertík
2024-03-14 | ggml : designate enum vals for integer types (#6050) | Georgi Gerganov
2024-03-09 | ggml : remove old quantization functions (#5942) | Georgi Gerganov
2024-03-08 | llama : support Mamba Selective State Space Models (#5328) | compilade
2024-03-04 | ggml : introduce ggml_status (ggml/750) | Michael Podvitskiy
2024-03-04 | add some new ops, fix some operators and add batch operations to certain oper... | leejet
2024-02-28 | Introduce backend GUIDs (ggml/743) | UEXTM.com
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747) | Kawrakow
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-25 | code : normalize enum names (#5697) | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676) | Kawrakow
2024-02-22 | ggml : always define ggml_fp16_t as uint16_t (#5666) | Georgi Gerganov
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | Kawrakow
2024-02-18 | 1.5 bit quantization (#5453) | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | Georgi Gerganov
2024-02-16 | ggml : add numa options (#5377) | bmwl
2024-02-12 | sync : ggml (#5452) | Georgi Gerganov
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966) | snadampal
2024-02-10 | ggml : add abort_callback for cpu backend (ggml/725) | Michael Podvitskiy
2024-01-31 | llava : add MobileVLM support (#5132) | JidongZhang-THU
2024-01-30 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) | Jared Van Bortel
2024-01-30 | SOTA 3-bit quants (#5196) | Kawrakow
2024-01-28 | ggml : add Vulkan backend (#2059) | 0cc4m
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-23 | minor : clean-up some warnings and style (#5094) | Georgi Gerganov
2024-01-22 | llava : MobileVLM support (#4954) | XiaotaoChen
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | Georgi Gerganov
2024-01-17 | imatrix : offload to GPU support (#4957) | Georgi Gerganov
2024-01-16 | ggml : introduce GGML_CALL function annotation (#4850) | Justine Tunney
2024-01-14 | 2-bit quantizations (#4897) | Kawrakow
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-12 | Importance Matrix calculation (#4861) | Kawrakow
2024-01-11 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | Kawrakow
2024-01-11 | ggml : remove ggml_cpy_inplace and ggml_cont_inplace (ggml/693) | Timothy Cronin
2024-01-11 | ggml : change GGML_MAX_NAME at compile time (ggml/682) | leejet
2024-01-08 | SOTA 2-bit quants (#4773) | Kawrakow
2023-12-30 | ggml : add ggml_cpu_has_avx_vnni() (#4589) | automaticcat
2023-12-24 | cuda : improve cuda pool efficiency using virtual memory (#4606) | slaren
2023-12-22 | ggml : extend `enum ggml_log_level` with `GGML_LOG_LEVEL_DEBUG` (#4579) | bobqianic
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov
2023-12-21 | llama : initial ggml-backend integration (#4520) | slaren
2023-12-19 | ggml : fixed check for _MSC_VER (#4535) | Eric Sommerlade