ik_llama.cpp.git: commit log for /ggml.h (branch: main)

Date        Commit message                                                                 Author
2024-01-31  llava : add MobileVLM support (#5132)                                          JidongZhang-THU
2024-01-30  kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)               Jared Van Bortel
2024-01-30  SOTA 3-bit quants (#5196)                                                      Kawrakow
2024-01-28  ggml : add Vulkan backend (#2059)                                              0cc4m
2024-01-28  ggml : add unified SYCL backend for Intel GPUs (#2690)                         Abhilash Majumder
2024-01-23  minor : clean-up some warnings and style (#5094)                               Georgi Gerganov
2024-01-22  llava : MobileVLM support (#4954)                                              XiaotaoChen
2024-01-17  ggml : add IQ2 to test-backend-ops + refactoring (#4990)                       Georgi Gerganov
2024-01-17  imatrix : offload to GPU support (#4957)                                       Georgi Gerganov
2024-01-16  ggml : introduce GGML_CALL function annotation (#4850)                         Justine Tunney
2024-01-14  2-bit quantizations (#4897)                                                    Kawrakow
2024-01-12  llama : ggml-backend integration (#4766)                                       slaren
2024-01-12  Importance Matrix calculation (#4861)                                          Kawrakow
2024-01-11  ggml : SOTA 2-bit quants (add IQ2_XS) (#4856)                                  Kawrakow
2024-01-11  ggml : remove ggml_cpy_inplace and ggml_cont_inplace (ggml/693)                Timothy Cronin
2024-01-11  ggml : change GGML_MAX_NAME at compile time (ggml/682)                         leejet
2024-01-08  SOTA 2-bit quants (#4773)                                                      Kawrakow
2023-12-30  ggml : add ggml_cpu_has_avx_vnni() (#4589)                                     automaticcat
2023-12-24  cuda : improve cuda pool efficiency using virtual memory (#4606)               slaren
2023-12-22  ggml : extend `enum ggml_log_level` with `GGML_LOG_LEVEL_DEBUG` (#4579)        bobqianic
2023-12-21  ggml : change ggml_scale to take a float instead of tensor (#4573)             Georgi Gerganov
2023-12-21  llama : initial ggml-backend integration (#4520)                               slaren
2023-12-19  ggml : fixed check for _MSC_VER (#4535)                                        Eric Sommerlade
2023-12-18  llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490)              Ebey Abraham
2023-12-14  ggml : use ggml_row_size where possible (#4472)                                slaren
2023-12-14  ggml : remove n_dims from ggml_tensor (#4469)                                  slaren
2023-12-14  ggml : add ggml_row_size() (fixes llama out of space) (#4461)                  LostRuins
2023-12-13  sync : ggml (SD ops, tests, kernels) (#4444)                                   Georgi Gerganov
2023-12-13  llama : add Mixtral support (#4406)                                            slaren
2023-12-12  ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (#4424)     Taikono-Himazin
2023-12-07  sync : ggml (new ops, tests, backend, etc.) (#4359)                            Georgi Gerganov
2023-12-01  ggml : add ggml_soft_max_ext (#4256)                                           Georgi Gerganov
2023-11-28  ggml : restore abort() in GGML_ASSERT (#4242)                                  Jared Van Bortel
2023-11-17  llama : add functions to get the model's metadata (#4013)                      slaren
2023-11-13  ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060)                      Georgi Gerganov
2023-11-13  sync : ggml (backend v2) (#3912)                                               Georgi Gerganov
2023-11-07  ggml : fix backward rope after YaRN (#3974)                                    xaedes
2023-11-01  llama : implement YaRN RoPE scaling (#2268)                                    cebtenzzre
2023-11-01  llama : refactor graph build code (#3837)                                      Georgi Gerganov
2023-10-29  ggml : quantization refactoring (#3833)                                        Georgi Gerganov
2023-10-24  sync : ggml (conv ops + cuda MSVC fixes) (#3765)                               Georgi Gerganov
2023-10-20  gguf : support big endian platform (#3552)                                     Qin Yue Chen
2023-10-13  ggml : add context enumeration functions (#3605)                               slaren
2023-10-08  sync : ggml (ggml-backend) (#3548)                                             Georgi Gerganov
2023-10-04  sync : ggml (conv 1d + 2d updates, UB fixes) (#3468)                           Georgi Gerganov
2023-09-28  build : enable more non-default compiler warnings (#3200)                      Cebtenzzre
2023-09-28  ggml_tensor: update the structure comments. (#3283)                            Hua Jiang
2023-09-28  train : finetune LORA (#2632)                                                  xaedes
2023-09-28  gguf : basic type checking in gguf_get_* (#3346)                               Cebtenzzre
2023-09-28  llama : custom attention mask + parallel decoding + no context swaps (#3228)   Georgi Gerganov