ik_llama.cpp.git (branch: main)
Commit log for path: root/ggml.c
Age        | Commit message                                                             | Author
2024-06-13 | move BLAS to a separate backend (#6210)                                    | slaren
2024-06-12 | tests : add non-cont unary tests (#7857)                                   | Georgi Gerganov
2024-06-12 | ggml : improve ggml_is_contiguous logic (#7856)                            | Georgi Gerganov
2024-06-05 | ggml : refactor rope norm/neox (#7634)                                     | Georgi Gerganov
2024-06-04 | ggml : remove OpenCL (#7735)                                               | Georgi Gerganov
2024-06-04 | ggml : prevent builds with -ffinite-math-only (#7726)                      | Georgi Gerganov
2024-06-03 | ggml : use OpenMP as a thread pool (#7606)                                 | Masaya, Kato
2024-05-31 | ggml : fix loongson compile warnings (#7537)                               | Georgi Gerganov
2024-05-30 | faster avx512 exp implementation (#7551)                                   | Chris Elrod
2024-05-30 | ggml : fix loongarch build (O2 issue) (#7636)                              | junchao-loongson
2024-05-29 | ggml : fix YARN + add tests + add asserts (#7617)                          | Georgi Gerganov
2024-05-29 | llama-bench : add support for the RPC backend (#7435)                      | Radoslav Gerganov
2024-05-29 | ggml : use atomic_flag for critical section (#7598)                        | slaren
2024-05-29 | ggml : restore ggml_rope_xpos_inplace (ggml/0)                             | Georgi Gerganov
2024-05-29 | ggml : fix typo in ggml.c (#7603)                                          | zhouwg
2024-05-28 | ggml : generalize GGML_OP_CONCAT (#7563)                                   | Georgi Gerganov
2024-05-25 | ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (#7433)     | Masaya, Kato
2024-05-23 | ggml : remove ggml_flash_attn and ggml_flash_ff (#7463)                    | Georgi Gerganov
2024-05-23 | ggml : drop support for QK_K=64 (#7473)                                    | Georgi Gerganov
2024-05-22 | cuda : fix rope + add tests (#7452)                                        | Georgi Gerganov
2024-05-21 | llama : add phi3 128K model support (#7225)                                | liuwei-git
2024-05-20 | ggml : add loongarch lsx and lasx support (#6454)                          | junchao-loongson
2024-05-20 | Add provisions for windows support for BF16 code including CMake provision fo... | Srihari-mcw
2024-05-19 | ggml: implement quantized KV cache for FA (#7372)                          | Johannes Gäßler
2024-05-18 | android : use "ci-android" branch for CI (#7341)                           | Georgi Gerganov
2024-05-17 | ggml : rewrite silu and softmax for cpu (#7154)                            | Justine Tunney
2024-05-15 | ggml : use dynamic thread scheduling for matrix multiplication (#6915)     | kunnis
2024-05-15 | ggml : tag ggml_tensor::backend as deprecated (#7290)                      | slaren
2024-05-15 | ggml : add `ggml_upscale_ext` (ggml/814)                                   | John Balis
2024-05-14 | metal : support FA without mask + add asserts (#7278)                      | Georgi Gerganov
2024-05-14 | ggml : try fix ppc64 (whisper/0)                                           | Georgi Gerganov
2024-05-11 | ggml : resolve merge (ggml/0)                                              | Georgi Gerganov
2024-05-11 | feat: implemented sigmoid function (ggml/806)                              | Justina Cho
2024-05-11 | ggml : full ALiBi support (#7192)                                          | Georgi Gerganov
2024-05-08 | ggml : introduce bfloat16 support (#6412)                                  | Justine Tunney
2024-05-04 | gguf-split: add --no-tensor-first-split (#7072)                            | Xuan Son Nguyen
2024-04-30 | ggml : add Flash Attention (#5021)                                         | Georgi Gerganov
2024-04-28 | gguf : enforce that tensor names are unique (#6905)                        | Xuan Son Nguyen
2024-04-26 | gguf : fix mismatch between alloc and free functions (#6929)               | slaren
2024-04-26 | Merge pull request from GHSA-p5mv-gjc5-mwqv                                | Georgi Gerganov
2024-04-25 | ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (#6906)               | Georgi Gerganov
2024-04-22 | llamafile : improve sgemm.cpp (#6796)                                      | Justine Tunney
2024-04-18 | ggml : group all experts in a single ggml_mul_mat_id (#6505)               | slaren
2024-04-16 | ggml : fix llamafile sgemm wdata offsets (#6710)                           | Georgi Gerganov
2024-04-16 | ggml : add llamafile sgemm (#6414)                                         | Justine Tunney
2024-04-12 | metal : unify mul_mv_id kernels (#6556)                                    | slaren
2024-04-12 | llama : add gguf_remove_key + remove split meta during quantize (#6591)    | jiez
2024-04-09 | llama : add Command R Plus support (#6491)                                 | Carolinabanana
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387)          | slaren
2024-03-29 | Vulkan k-quant mmq and ggml-backend offload functionality (#6155)          | 0cc4m