path: root/ggml.c
Age         Author              Commit message
2024-05-18  Georgi Gerganov     android : use "ci-android" branch for CI (#7341)
2024-05-17  Justine Tunney      ggml : rewrite silu and softmax for cpu (#7154)
2024-05-15  kunnis              ggml : use dynamic thread scheduling for matrix multiplication (#6915)
2024-05-15  slaren              ggml : tag ggml_tensor::backend as deprecated (#7290)
2024-05-15  John Balis          ggml : add `ggml_upscale_ext` (ggml/814)
2024-05-14  Georgi Gerganov     metal : support FA without mask + add asserts (#7278)
2024-05-14  Georgi Gerganov     ggml : try fix ppc64 (whisper/0)
2024-05-11  Georgi Gerganov     ggml : resolve merge (ggml/0)
2024-05-11  Justina Cho         feat: implemented sigmoid function (ggml/806)
2024-05-11  Georgi Gerganov     ggml : full ALiBi support (#7192)
2024-05-08  Justine Tunney      ggml : introduce bfloat16 support (#6412)
2024-05-04  Xuan Son Nguyen     gguf-split: add --no-tensor-first-split (#7072)
2024-04-30  Georgi Gerganov     ggml : add Flash Attention (#5021)
2024-04-28  Xuan Son Nguyen     gguf : enforce that tensor names are unique (#6905)
2024-04-26  slaren              gguf : fix mismatch between alloc and free functions (#6929)
2024-04-26  Georgi Gerganov     Merge pull request from GHSA-p5mv-gjc5-mwqv
2024-04-25  Georgi Gerganov     ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (#6906)
2024-04-22  Justine Tunney      llamafile : improve sgemm.cpp (#6796)
2024-04-18  slaren              ggml : group all experts in a single ggml_mul_mat_id (#6505)
2024-04-16  Georgi Gerganov     ggml : fix llamafile sgemm wdata offsets (#6710)
2024-04-16  Justine Tunney      ggml : add llamafile sgemm (#6414)
2024-04-12  slaren              metal : unify mul_mv_id kernels (#6556)
2024-04-12  jiez                llama : add gguf_remove_key + remove split meta during quantize (#6591)
2024-04-09  Carolinabanana      llama : add Command R Plus support (#6491)
2024-04-03  slaren              ggml : mul_mat_id use the same tensor for all the experts (#6387)
2024-03-29  0cc4m               Vulkan k-quant mmq and ggml-backend offload functionality (#6155)
2024-03-27  slaren              ggml : fix bounds checking of zero size views (#6347)
2024-03-26  compilade           llama : greatly reduce output buffer memory usage (#6122)
2024-03-26  Kawrakow            IQ1_M: 1.75 bpw quantization (#6302)
2024-03-26  slaren              cuda : rename build flag to LLAMA_CUDA (#6299)
2024-03-24  Rick G              Fix heap corruption from wmode out-of-bound writes on windows (#6272)
2024-03-24  Meng, Hengyu        [SYCL] offload op (#6217)
2024-03-23  Jared Van Bortel    use _wfopen instead of fopen on Windows (#6248)
2024-03-18  slaren              backend : offload large batches to GPU (#6083)
2024-03-16  AmirAli Mirian      ggml : add AVX512F SIMD (#6088)
2024-03-15  Ondřej Čertík       gguf : add support for I64 and F64 arrays (#6062)
2024-03-13  slaren              llama : add pipeline parallelism support (#6017)
2024-03-11  Michael Podvitskiy  ggml, ci : Windows ARM runner and build fixes (#5979)
2024-03-09  Georgi Gerganov     ggml : remove old quantization functions (#5942)
2024-03-08  compilade           llama : support Mamba Selective State Space Models (#5328)
2024-03-06  Jared Van Bortel    ggml : use SYS_get_cpu if SYS_getcpu is not defined (#5906)
2024-03-04  Georgi Gerganov     ggml : fix unknown status (#0)
2024-03-04  Michael Podvitskiy  ggml : introduce ggml_status (ggml/750)
2024-03-04  leejet              add some new ops, fix some operators and add batch operations to certain oper...
2024-02-28  slaren              add google magika inference example (ggml/748)
2024-02-28  UEXTM.com           Introduce backend GUIDs (ggml/743)
2024-02-28  Kawrakow            ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (#5760)
2024-02-27  Kawrakow            IQ4_XS: a 4.25 bpw quantization (#5747)
2024-02-26  Kawrakow            Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range...
2024-02-25  Georgi Gerganov     code : normalize enum names (#5697)