ik_llama.cpp.git (branch: main)
path: root/tests/test-backend-ops.cpp
Age | Commit message | Author
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | Georgi Gerganov
2024-01-14 | 2-bit quantizations (#4897) | Kawrakow
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-09 | CUDA: faster softmax via shared memory + fp16 math (#4742) | Johannes Gäßler
2024-01-04 | Print backend name on test-backend-ops failure (#4751) | Johannes Gäßler
2024-01-03 | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639) | Guillaume Wenzek
2024-01-02 | metal : enable shader debugging (cmake option) (#4705) | Georgi Gerganov
2023-12-29 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | bssrdf
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov
2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | Ebey Abraham
2023-12-14 | ggml : use ggml_row_size where possible (#4472) | slaren
2023-12-13 | sync : ggml (SD ops, tests, kernels) (#4444) | Georgi Gerganov
2023-12-13 | llama : add Mixtral support (#4406) | slaren
2023-12-07 | sync : ggml (new ops, tests, backend, etc.) (#4359) | Georgi Gerganov