ik_llama.cpp.git (branch: main)
path: root/ggml.c

Age         Commit message                                                              Author
2024-03-13  llama : add pipeline parallelism support (#6017)  [slaren]
2024-03-11  ggml, ci : Windows ARM runner and build fixes (#5979)  [Michael Podvitskiy]
2024-03-09  ggml : remove old quantization functions (#5942)  [Georgi Gerganov]
2024-03-08  llama : support Mamba Selective State Space Models (#5328)  [compilade]
2024-03-06  ggml : use SYS_get_cpu if SYS_getcpu is not defined (#5906)  [Jared Van Bortel]
2024-03-04  ggml : fix unknown status (#0)  [Georgi Gerganov]
2024-03-04  ggml : introduce ggml_status (ggml/750)  [Michael Podvitskiy]
2024-03-04  add some new ops, fix some operators and add batch operations to certain oper...  [leejet]
2024-02-28  add google magika inference example (ggml/748)  [slaren]
2024-02-28  Introduce backend GUIDs (ggml/743)  [UEXTM.com]
2024-02-28  ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (#5760)  [Kawrakow]
2024-02-27  IQ4_XS: a 4.25 bpw quantization (#5747)  [Kawrakow]
2024-02-26  Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range...  [Kawrakow]
2024-02-25  code : normalize enum names (#5697)  [Georgi Gerganov]
2024-02-24  IQ3_S: a much better alternative to Q3_K (#5676)  [Kawrakow]
2024-02-22  ggml : always define ggml_fp16_t as uint16_t (#5666)  [Georgi Gerganov]
2024-02-21  sync : ggml (#5633)  [Georgi Gerganov]
2024-02-21  IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)  [Kawrakow]
2024-02-19  Allow for Vulkan build with Accelerate.  [Mathijs de Bruin]
2024-02-19  ggml : android and old glibc NUMA incompatibility bugfixes (#5557)  [bmwl]
2024-02-18  ggml, common, examples, tests : fixed type arguments in printf (#5528)  [Herman Semenov]
2024-02-18  1.5 bit quantization (#5453)  [Kawrakow]
2024-02-17  ggml : add ALiBi support for ggml_soft_max_ext (#5488)  [Georgi Gerganov]
2024-02-17  ci : add an option to fail on compile warning (#3952)  [Ananta Bastola]
2024-02-16  ggml : add numa options (#5377)  [bmwl]
2024-02-12  sync : ggml (#5452)  [Georgi Gerganov]
2024-02-11  ggml : add mmla kernels for quantized GEMM (#4966)  [snadampal]
2024-02-10  ggml : add abort_callback for cpu backend (ggml/725)  [Michael Podvitskiy]
2024-02-07  Basic Vulkan Multi-GPU implementation (#5321)  [0cc4m]
2024-02-05  ggml : avoid duplicating function calls using MIN/MAX macros (#5325)  [Dr. Tom Murphy VII Ph.D]
2024-01-31  llava : add MobileVLM support (#5132)  [JidongZhang-THU]
2024-01-31  ggml : limit n_threads to the max n_tasks (#5238)  [slaren]
2024-01-30  kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)  [Jared Van Bortel]
2024-01-30  gguf : fix comparison (ggml/715)  [Georgi Gerganov]
2024-01-30  gguf : add input validation, prevent integer overflows (ggml/709)  [Georgi Gerganov]
2024-01-30  SOTA 3-bit quants (#5196)  [Kawrakow]
2024-01-28  ggml : minor type fix (int64_t -> size_t)  [Georgi Gerganov]
2024-01-28  ggml : add Vulkan backend (#2059)  [0cc4m]
2024-01-28  ggml : add unified SYCL backend for Intel GPUs (#2690)  [Abhilash Majumder]
2024-01-27  ggml : check ggml_add src1 type (ggml/708)  [Judd]
2024-01-26  Add OpenCL add kernel (#5151)  [0cc4m]
2024-01-26  ggml : update softmax n_task calculation (#5126)  [snadampal]
2024-01-23  minor : clean-up some warnings and style (#5094)  [Georgi Gerganov]
2024-01-22  ggml : parallelize FP32 conversion when using BLAS (#5045)  [Reinforce-II]
2024-01-22  llava : MobileVLM support (#4954)  [XiaotaoChen]
2024-01-17  ggml : add IQ2 to test-backend-ops + refactoring (#4990)  [Georgi Gerganov]
2024-01-17  imatrix : offload to GPU support (#4957)  [Georgi Gerganov]
2024-01-16  ggml : importance matrix support for legacy quants (#4969)  [Kawrakow]
2024-01-16  ggml : introduce GGML_CALL function annotation (#4850)  [Justine Tunney]
2024-01-14  Add ability to use importance matrix for all k-quants (#4930)  [Kawrakow]