ik_llama.cpp.git (branch: main)
Commit log for tests/test-backend-ops.cpp
Age | Commit message | Author
2024-08-27 | Faster Gemma2 (#27) | Kawrakow
2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-17 | Add support for sqrt on CUDA (#7953) | Calvin Laurenson
2024-06-12 | tests : add non-cont unary tests (#7857) | Georgi Gerganov
2024-06-05 | ggml : refactor rope norm/neox (#7634) | Georgi Gerganov
2024-06-01 | Fix FlashAttention debug test, FP32 assert (#7684) | Johannes Gäßler
2024-06-01 | CUDA: quantized KV support for FA vec (#7527) | Johannes Gäßler
2024-05-29 | ggml : fix YARN + add tests + add asserts (#7617) | Georgi Gerganov
2024-05-29 | cuda : non-cont concat support (#7610) | Georgi Gerganov
2024-05-28 | ggml : generalize GGML_OP_CONCAT (#7563) | Georgi Gerganov
2024-05-22 | cuda : fix rope + add tests (#7452) | Georgi Gerganov
2024-05-21 | llama : add phi3 128K model support (#7225) | liuwei-git
2024-05-18 | ggml : fix quants nans when all the group weights are very close to zero (#7313) | slaren
2024-05-15 | ggml : add `ggml_upscale_ext` (ggml/814) | John Balis
2024-05-14 | metal : support FA without mask + add asserts (#7278) | Georgi Gerganov
2024-05-12 | CUDA: add FP32 FlashAttention vector kernel (#7188) | Johannes Gäßler
2024-05-11 | ggml : full ALiBi support (#7192) | Georgi Gerganov
2024-05-09 | CUDA: generalize FP16 fattn vec kernel (#7061) | Johannes Gäßler
2024-05-08 | ggml : introduce bfloat16 support (#6412) | Justine Tunney
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-18 | ggml : group all experts in a single ggml_mul_mat_id (#6505) | slaren
2024-04-16 | llama : add qwen2moe (#6074) | Shijie
2024-04-12 | metal : unify mul_mv_id kernels (#6556) | slaren
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387) | slaren
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302) | Kawrakow
2024-03-22 | metal : pad n_ctx by 32 (#6177) | Georgi Gerganov
2024-03-13 | test-backend-ops : skip CPU backend by default (#6028) | slaren
2024-03-09 | ggml : remove old quantization functions (#5942) | Georgi Gerganov
2024-03-04 | add some new ops, fix some operators and add batch operations to certain oper... | leejet
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747) | Kawrakow
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-25 | code : normalize enum names (#5697) | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676) | Kawrakow
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | Kawrakow
2024-02-18 | 1.5 bit quantization (#5453) | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | Georgi Gerganov
2024-02-13 | tests : disable moe test (#5473) | Georgi Gerganov
2024-01-31 | llava : add MobileVLM support (#5132) | JidongZhang-THU
2024-01-30 | `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686) | John Balis
2024-01-30 | SOTA 3-bit quants (#5196) | Kawrakow
2024-01-29 | Nomic Vulkan backend (#4456) | Jared Van Bortel
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-27 | Remove unused data and add fixes (#5154) | Michael Klimenko
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | Georgi Gerganov
2024-01-14 | 2-bit quantizations (#4897) | Kawrakow
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-09 | CUDA: faster softmax via shared memory + fp16 math (#4742) | Johannes Gäßler
2024-01-04 | Print backend name on test-backend-ops failure (#4751) | Johannes Gäßler
2024-01-03 | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639) | Guillaume Wenzek