path: root/tests/test-backend-ops.cpp
Age         Commit message  [Author]
2024-08-27  Faster Gemma2 (#27)  [Kawrakow]
2024-08-12  Merge mainline - Aug 12 2024 (#17)  [Kawrakow]
2024-07-27  Merge mainline llama.cpp (#3)  [Kawrakow]
2024-06-17  Add support for sqrt on CUDA (#7953)  [Calvin Laurenson]
2024-06-12  tests : add non-cont unary tests (#7857)  [Georgi Gerganov]
2024-06-05  ggml : refactor rope norm/neox (#7634)  [Georgi Gerganov]
2024-06-01  Fix FlashAttention debug test, FP32 assert (#7684)  [Johannes Gäßler]
2024-06-01  CUDA: quantized KV support for FA vec (#7527)  [Johannes Gäßler]
2024-05-29  ggml : fix YARN + add tests + add asserts (#7617)  [Georgi Gerganov]
2024-05-29  cuda : non-cont concat support (#7610)  [Georgi Gerganov]
2024-05-28  ggml : generalize GGML_OP_CONCAT (#7563)  [Georgi Gerganov]
2024-05-22  cuda : fix rope + add tests (#7452)  [Georgi Gerganov]
2024-05-21  llama : add phi3 128K model support (#7225)  [liuwei-git]
2024-05-18  ggml : fix quants nans when all the group weights are very close to zero (#7313)  [slaren]
2024-05-15  ggml : add `ggml_upscale_ext` (ggml/814)  [John Balis]
2024-05-14  metal : support FA without mask + add asserts (#7278)  [Georgi Gerganov]
2024-05-12  CUDA: add FP32 FlashAttention vector kernel (#7188)  [Johannes Gäßler]
2024-05-11  ggml : full ALiBi support (#7192)  [Georgi Gerganov]
2024-05-09  CUDA: generalize FP16 fattn vec kernel (#7061)  [Johannes Gäßler]
2024-05-08  ggml : introduce bfloat16 support (#6412)  [Justine Tunney]
2024-04-30  ggml : add Flash Attention (#5021)  [Georgi Gerganov]
2024-04-18  ggml : group all experts in a single ggml_mul_mat_id (#6505)  [slaren]
2024-04-16  llama : add qwen2moe (#6074)  [Shijie]
2024-04-12  metal : unify mul_mv_id kernels (#6556)  [slaren]
2024-04-03  ggml : mul_mat_id use the same tensor for all the experts (#6387)  [slaren]
2024-03-26  IQ1_M: 1.75 bpw quantization (#6302)  [Kawrakow]
2024-03-22  metal : pad n_ctx by 32 (#6177)  [Georgi Gerganov]
2024-03-13  test-backend-ops : skip CPU backend by default (#6028)  [slaren]
2024-03-09  ggml : remove old quantization functions (#5942)  [Georgi Gerganov]
2024-03-04  add some new ops, fix some operators and add batch operations to certain oper...  [leejet]
2024-02-27  IQ4_XS: a 4.25 bpw quantization (#5747)  [Kawrakow]
2024-02-26  Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range...  [Kawrakow]
2024-02-25  code : normalize enum names (#5697)  [Georgi Gerganov]
2024-02-24  IQ3_S: a much better alternative to Q3_K (#5676)  [Kawrakow]
2024-02-21  IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)  [Kawrakow]
2024-02-18  1.5 bit quantization (#5453)  [Kawrakow]
2024-02-17  ggml : add ALiBi support for ggml_soft_max_ext (#5488)  [Georgi Gerganov]
2024-02-13  tests : disable moe test (#5473)  [Georgi Gerganov]
2024-01-31  llava : add MobileVLM support (#5132)  [JidongZhang-THU]
2024-01-30  `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686)  [John Balis]
2024-01-30  SOTA 3-bit quants (#5196)  [Kawrakow]
2024-01-29  Nomic Vulkan backend (#4456)  [Jared Van Bortel]
2024-01-28  ggml : add unified SYCL backend for Intel GPUs (#2690)  [Abhilash Majumder]
2024-01-27  Remove unused data and add fixes (#5154)  [Michael Klimenko]
2024-01-17  ggml : add IQ2 to test-backend-ops + refactoring (#4990)  [Georgi Gerganov]
2024-01-14  2-bit quantizations (#4897)  [Kawrakow]
2024-01-12  llama : ggml-backend integration (#4766)  [slaren]
2024-01-09  CUDA: faster softmax via shared memory + fp16 math (#4742)  [Johannes Gäßler]
2024-01-04  Print backend name on test-backend-ops failure (#4751)  [Johannes Gäßler]
2024-01-03  ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639)  [Guillaume Wenzek]