ik_llama.cpp.git (branch: main)
Commit log for path /ggml.c

Age        | Commit message                                                     | Author
2024-02-05 | ggml : avoid duplicating function calls using MIN/MAX macros (#5325) | Dr. Tom Murphy VII Ph.D
2024-01-31 | llava : add MobileVLM support (#5132)                              | JidongZhang-THU
2024-01-31 | ggml : limit n_threads to the max n_tasks (#5238)                  | slaren
2024-01-30 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)   | Jared Van Bortel
2024-01-30 | gguf : fix comparison (ggml/715)                                   | Georgi Gerganov
2024-01-30 | gguf : add input validation, prevent integer overflows (ggml/709)  | Georgi Gerganov
2024-01-30 | SOTA 3-bit quants (#5196)                                          | Kawrakow
2024-01-28 | ggml : minor type fix (int64_t -> size_t)                          | Georgi Gerganov
2024-01-28 | ggml : add Vulkan backend (#2059)                                  | 0cc4m
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690)             | Abhilash Majumder
2024-01-27 | ggml : check ggml_add src1 type (ggml/708)                         | Judd
2024-01-26 | Add OpenCL add kernel (#5151)                                      | 0cc4m
2024-01-26 | ggml : update softmax n_task calculation (#5126)                   | snadampal
2024-01-23 | minor : clean-up some warnings and style (#5094)                   | Georgi Gerganov
2024-01-22 | ggml : parallelize FP32 conversion when using BLAS (#5045)         | Reinforce-II
2024-01-22 | llava : MobileVLM support (#4954)                                  | XiaotaoChen
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990)           | Georgi Gerganov
2024-01-17 | imatrix : offload to GPU support (#4957)                           | Georgi Gerganov
2024-01-16 | ggml : importance matrix support for legacy quants (#4969)         | Kawrakow
2024-01-16 | ggml : introduce GGML_CALL function annotation (#4850)             | Justine Tunney
2024-01-14 | Add ability to use importance matrix for all k-quants (#4930)      | Kawrakow
2024-01-14 | 2-bit quantizations (#4897)                                        | Kawrakow
2024-01-13 | ggml : cache sin/cos for RoPE (#4908)                              | Johannes Gäßler
2024-01-13 | gguf : fix potential infinite for-loop (#4600)                     | texmex76
2024-01-12 | llama : ggml-backend integration (#4766)                           | slaren
2024-01-12 | Importance Matrix calculation (#4861)                              | Kawrakow
2024-01-11 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856)                      | Kawrakow
2024-01-11 | ggml : remove ggml_cpy_inplace and ggml_cont_inplace (ggml/693)    | Timothy Cronin
2024-01-11 | Fix execlp call (ggml/689)                                         | Halalaluyafail3
2024-01-08 | SOTA 2-bit quants (#4773)                                          | Kawrakow
2024-01-05 | ggml : do not sched_yield when calling BLAS (#4761)                | Georgi Gerganov
2024-01-03 | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639)   | Guillaume Wenzek
2023-12-30 | ggml : add ggml_cpu_has_avx_vnni() (#4589)                         | automaticcat
2023-12-29 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669)  | bssrdf
2023-12-26 | cuda : fix vmm pool with multi GPU (#4620)                         | slaren
2023-12-26 | Update comment for AdamW implementation reference. (#4604)         | WillCorticesAI
2023-12-24 | cuda : improve cuda pool efficiency using virtual memory (#4606)   | slaren
2023-12-22 | llama : fix platforms without mmap (#4578)                         | slaren
2023-12-22 | ggml : add comment about backward GGML_OP_DIAG_MASK_INF (#4203)    | Herman Semenov
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov
2023-12-21 | llama : initial ggml-backend integration (#4520)                   | slaren
2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490)  | Ebey Abraham
2023-12-15 | ggml : group mul_mat_id rows by matrix (cpu only) (#4480)          | slaren
2023-12-14 | ggml : use ggml_row_size where possible (#4472)                    | slaren
2023-12-14 | ggml : remove n_dims from ggml_tensor (#4469)                      | slaren
2023-12-14 | ggml : add ggml_row_size() (fixes llama out of space) (#4461)      | LostRuins
2023-12-14 | ggml : fix OpenCL broadcast requirement for ggml_mul (close #4453) | Georgi Gerganov
2023-12-13 | sync : ggml (SD ops, tests, kernels) (#4444)                       | Georgi Gerganov
2023-12-13 | llama : add Mixtral support (#4406)                                | slaren
2023-12-12 | english : use `typos` to fix comments and logs (#4354)             | Richard Kiss