ik_llama.cpp.git: commit log for ggml.c (branch: main)
Age         Commit message  [Author]

2023-12-29  ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669)  [bssrdf]
2023-12-26  cuda : fix vmm pool with multi GPU (#4620)  [slaren]
2023-12-26  Update comment for AdamW implementation reference. (#4604)  [WillCorticesAI]
2023-12-24  cuda : improve cuda pool efficiency using virtual memory (#4606)  [slaren]
2023-12-22  llama : fix platforms without mmap (#4578)  [slaren]
2023-12-22  ggml : add comment about backward GGML_OP_DIAG_MASK_INF (#4203)  [Herman Semenov]
2023-12-21  ggml : change ggml_scale to take a float instead of tensor (#4573)  [Georgi Gerganov]
2023-12-21  llama : initial ggml-backend integration (#4520)  [slaren]
2023-12-18  llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490)  [Ebey Abraham]
2023-12-15  ggml : group mul_mat_id rows by matrix (cpu only) (#4480)  [slaren]
2023-12-14  ggml : use ggml_row_size where possible (#4472)  [slaren]
2023-12-14  ggml : remove n_dims from ggml_tensor (#4469)  [slaren]
2023-12-14  ggml : add ggml_row_size() (fixes llama out of space) (#4461)  [LostRuins]
2023-12-14  ggml : fix OpenCL broadcast requirement for ggml_mul (close #4453)  [Georgi Gerganov]
2023-12-13  sync : ggml (SD ops, tests, kernels) (#4444)  [Georgi Gerganov]
2023-12-13  llama : add Mixtral support (#4406)  [slaren]
2023-12-12  english : use `typos` to fix comments and logs (#4354)  [Richard Kiss]
2023-12-07  sync : ggml (new ops, tests, backend, etc.) (#4359)  [Georgi Gerganov]
2023-12-03  ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (#4308)  [Georgi Gerganov]
2023-12-03  ggml : fix soft max out-of-bounds access (#4307)  [Georgi Gerganov]
2023-12-01  ggml : add ggml_soft_max_ext (#4256)  [Georgi Gerganov]
2023-11-28  ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offloa...  [Georgi Gerganov]
2023-11-26  ggml : fix -Warray-bounds warning with gcc (#4231)  [Jared Van Bortel]
2023-11-17  llama : add functions to get the model's metadata (#4013)  [slaren]
2023-11-17  finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (#4079)  [gwjr]
2023-11-16  gguf : fix potential infinite loops while parsing (#4100)  [texmex76]
2023-11-13  ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060)  [Georgi Gerganov]
2023-11-13  sync : ggml (backend v2) (#3912)  [Georgi Gerganov]
2023-11-07  ggml : fix backward rope after YaRN (#3974)  [xaedes]
2023-11-02  gguf : print error for GGUFv1 files (#3908)  [Georgi Gerganov]
2023-11-02  gguf : remove special-case code for GGUFv1 (#3901)  [Georgi Gerganov]
2023-11-01  llama : implement YaRN RoPE scaling (#2268)  [cebtenzzre]
2023-11-01  finetune : add -ngl parameter (#3762)  [Andrew Godfrey]
2023-10-30  ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861)  [Georgi Gerganov]
2023-10-29  ggml : quantization refactoring (#3833)  [Georgi Gerganov]
2023-10-24  sync : ggml (conv ops + cuda MSVC fixes) (#3765)  [Georgi Gerganov]
2023-10-24  cuda : add batched cuBLAS GEMM for faster attention (#3749)  [Georgi Gerganov]
2023-10-20  gguf : support big endian platform (#3552)  [Qin Yue Chen]
2023-10-20  ggml : fix rope + llama minor optimizations (#3560)  [Herman Semenov]
2023-10-13  ggml : add context enumeration functions (#3605)  [slaren]
2023-10-12  examples: support LLaVA v1.5 (multimodal model) (#3436)  [M. Yusuf Sarıgöz]
2023-10-10  llm : add MPT support (#3417)  [Jan Ploski]
2023-10-09  refact : fix convert script + zero out KV cache to avoid nans (#3523)  [Georgi Gerganov]
2023-10-08  sync : ggml (ggml-backend) (#3548)  [Georgi Gerganov]
2023-10-04  ggml : fix build after #3329  [Georgi Gerganov]
2023-10-04  llm : add Refact model (#3329)  [ds5t5]
2023-10-04  sync : ggml (conv 1d + 2d updates, UB fixes) (#3468)  [Georgi Gerganov]
2023-10-03  ggml : add RISC-V Vector Support for K-Quants and improved the existing intri...  [Tameem]
2023-10-02  CLBlast: Add broadcast support for matrix multiplication (#3402)  [shibe2]
2023-09-28  build : enable more non-default compiler warnings (#3200)  [Cebtenzzre]
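
One entry above, 2023-12-21 "ggml : change ggml_scale to take a float instead of tensor (#4573)", records an API change that code calling into ggml.c has to follow. The sketch below is not taken from the repository; it is a minimal, hypothetical caller showing how the call site changes, assuming a checkout that already contains #4573. The tensor size, fill value, and scale factor 0.5f are illustrative only.

    // Minimal sketch of the ggml_scale API after #4573 (scale factor is a plain float).
    #include "ggml.h"
    #include <stdio.h>

    int main(void) {
        struct ggml_init_params params = {
            /*.mem_size   =*/ 16 * 1024 * 1024,
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ false,
        };
        struct ggml_context * ctx = ggml_init(params);

        // a 4-element F32 tensor filled with 2.0 (illustrative values)
        struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        ggml_set_f32(a, 2.0f);

        // before #4573 the scale was a 1-element tensor, e.g. ggml_scale(ctx, a, ggml_new_f32(ctx, 0.5f));
        // after  #4573 it is passed directly as a float:
        struct ggml_tensor * b = ggml_scale(ctx, a, 0.5f);

        struct ggml_cgraph * gf = ggml_new_graph(ctx);
        ggml_build_forward_expand(gf, b);
        ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 1);

        printf("b[0] = %f\n", ggml_get_f32_1d(b, 0));  // expected: 1.000000

        ggml_free(ctx);
        return 0;
    }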