ik_llama.cpp.git (branch: main)
path: root/ggml.h
Date        Commit message  (Author)
2024-05-23  ggml : remove ggml_flash_attn and ggml_flash_ff (#7463)  (Georgi Gerganov)
2024-05-22  cuda : fix rope + add tests (#7452)  (Georgi Gerganov)
2024-05-21  llama : add phi3 128K model support (#7225)  (liuwei-git)
2024-05-20  Add provisions for windows support for BF16 code including CMake provision fo...  (Srihari-mcw)
2024-05-15  ggml : tag ggml_tensor::backend as deprecated (#7290)  (slaren)
2024-05-15  ggml : add `ggml_upscale_ext` (ggml/814)  (John Balis)
2024-05-14  metal : support FA without mask + add asserts (#7278)  (Georgi Gerganov)
2024-05-11  feat: implemented sigmoid function (ggml/806)  (Justina Cho)
2024-05-11  ggml : full ALiBi support (#7192)  (Georgi Gerganov)
2024-05-08  ggml : introduce bfloat16 support (#6412)  (Justine Tunney)
2024-04-30  ggml : add Flash Attention (#5021)  (Georgi Gerganov)
2024-04-26  add basic tensor data validation function (#6884)  (slaren)
2024-04-18  ggml : group all experts in a single ggml_mul_mat_id (#6505)  (slaren)
2024-04-12  llama : add gguf_remove_key + remove split meta during quantize (#6591)  (jiez)
2024-04-09  llama : add Command R Plus support (#6491)  (Carolinabanana)
2024-04-03  ggml : mul_mat_id use the same tensor for all the experts (#6387)  (slaren)
2024-03-26  llama : greatly reduce output buffer memory usage (#6122)  (compilade)
2024-03-26  IQ1_M: 1.75 bpw quantization (#6302)  (Kawrakow)
2024-03-26  cuda : rename build flag to LLAMA_CUDA (#6299)  (slaren)
2024-03-23  use _wfopen instead of fopen on Windows (#6248)  (Jared Van Bortel)
2024-03-15  gguf : add support for I64 and F64 arrays (#6062)  (Ondřej Čertík)
2024-03-14  ggml : designate enum vals for integer types (#6050)  (Georgi Gerganov)
2024-03-09  ggml : remove old quantization functions (#5942)  (Georgi Gerganov)
2024-03-08  llama : support Mamba Selective State Space Models (#5328)  (compilade)
2024-03-04  ggml : introduce ggml_status (ggml/750)  (Michael Podvitskiy)
2024-03-04  add some new ops, fix some operators and add batch operations to certain oper...  (leejet)
2024-02-28  Introduce backend GUIDs (ggml/743)  (UEXTM.com)
2024-02-27  IQ4_XS: a 4.25 bpw quantization (#5747)  (Kawrakow)
2024-02-26  Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range...  (Kawrakow)
2024-02-25  code : normalize enum names (#5697)  (Georgi Gerganov)
2024-02-24  IQ3_S: a much better alternative to Q3_K (#5676)  (Kawrakow)
2024-02-22  ggml : always define ggml_fp16_t as uint16_t (#5666)  (Georgi Gerganov)
2024-02-21  IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)  (Kawrakow)
2024-02-18  1.5 bit quantization (#5453)  (Kawrakow)
2024-02-17  ggml : add ALiBi support for ggml_soft_max_ext (#5488)  (Georgi Gerganov)
2024-02-16  ggml : add numa options (#5377)  (bmwl)
2024-02-12  sync : ggml (#5452)  (Georgi Gerganov)
2024-02-11  ggml : add mmla kernels for quantized GEMM (#4966)  (snadampal)
2024-02-10  ggml : add abort_callback for cpu backend (ggml/725)  (Michael Podvitskiy)
2024-01-31  llava : add MobileVLM support (#5132)  (JidongZhang-THU)
2024-01-30  kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)  (Jared Van Bortel)
2024-01-30  SOTA 3-bit quants (#5196)  (Kawrakow)
2024-01-28  ggml : add Vulkan backend (#2059)  (0cc4m)
2024-01-28  ggml : add unified SYCL backend for Intel GPUs (#2690)  (Abhilash Majumder)
2024-01-23  minor : clean-up some warnings and style (#5094)  (Georgi Gerganov)
2024-01-22  llava : MobileVLM support (#4954)  (XiaotaoChen)
2024-01-17  ggml : add IQ2 to test-backend-ops + refactoring (#4990)  (Georgi Gerganov)
2024-01-17  imatrix : offload to GPU support (#4957)  (Georgi Gerganov)
2024-01-16  ggml : introduce GGML_CALL function annotation (#4850)  (Justine Tunney)
2024-01-14  2-bit quantizations (#4897)  (Kawrakow)