ik_llama.cpp.git (branch: main)
path: root / ggml.c
Age        | Commit message                                                              | Author
2024-05-04 | gguf-split: add --no-tensor-first-split (#7072)                             | Xuan Son Nguyen
2024-04-30 | ggml : add Flash Attention (#5021)                                          | Georgi Gerganov
2024-04-28 | gguf : enforce that tensor names are unique (#6905)                         | Xuan Son Nguyen
2024-04-26 | gguf : fix mismatch between alloc and free functions (#6929)                | slaren
2024-04-26 | Merge pull request from GHSA-p5mv-gjc5-mwqv                                 | Georgi Gerganov
2024-04-25 | ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (#6906)                | Georgi Gerganov
2024-04-22 | llamafile : improve sgemm.cpp (#6796)                                       | Justine Tunney
2024-04-18 | ggml : group all experts in a single ggml_mul_mat_id (#6505)                | slaren
2024-04-16 | ggml : fix llamafile sgemm wdata offsets (#6710)                            | Georgi Gerganov
2024-04-16 | ggml : add llamafile sgemm (#6414)                                          | Justine Tunney
2024-04-12 | metal : unify mul_mv_id kernels (#6556)                                     | slaren
2024-04-12 | llama : add gguf_remove_key + remove split meta during quantize (#6591)     | jiez
2024-04-09 | llama : add Command R Plus support (#6491)                                  | Carolinabanana
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387)           | slaren
2024-03-29 | Vulkan k-quant mmq and ggml-backend offload functionality (#6155)           | 0cc4m
2024-03-27 | ggml : fix bounds checking of zero size views (#6347)                       | slaren
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122)                   | compilade
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302)                                        | Kawrakow
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299)                              | slaren
2024-03-24 | Fix heap corruption from wmode out-of-bound writes on windows (#6272)       | Rick G
2024-03-24 | [SYCL] offload op (#6217)                                                   | Meng, Hengyu
2024-03-23 | use _wfopen instead of fopen on Windows (#6248)                             | Jared Van Bortel
2024-03-18 | backend : offload large batches to GPU (#6083)                              | slaren
2024-03-16 | ggml : add AVX512F SIMD (#6088)                                             | AmirAli Mirian
2024-03-15 | gguf : add support for I64 and F64 arrays (#6062)                           | Ondřej Čertík
2024-03-13 | llama : add pipeline parallelism support (#6017)                            | slaren
2024-03-11 | ggml, ci : Windows ARM runner and build fixes (#5979)                       | Michael Podvitskiy
2024-03-09 | ggml : remove old quantization functions (#5942)                            | Georgi Gerganov
2024-03-08 | llama : support Mamba Selective State Space Models (#5328)                  | compilade
2024-03-06 | ggml : use SYS_get_cpu if SYS_getcpu is not defined (#5906)                 | Jared Van Bortel
2024-03-04 | ggml : fix unknown status (#0)                                              | Georgi Gerganov
2024-03-04 | ggml : introduce ggml_status (ggml/750)                                     | Michael Podvitskiy
2024-03-04 | add some new ops, fix some operators and add batch operations to certain oper... | leejet
2024-02-28 | add google magika inference example (ggml/748)                              | slaren
2024-02-28 | Introduce backend GUIDs (ggml/743)                                          | UEXTM.com
2024-02-28 | ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (#5760)       | Kawrakow
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747)                                     | Kawrakow
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-25 | code : normalize enum names (#5697)                                         | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676)                            | Kawrakow
2024-02-22 | ggml : always define ggml_fp16_t as uint16_t (#5666)                        | Georgi Gerganov
2024-02-21 | sync : ggml (#5633)                                                         | Georgi Gerganov
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)                   | Kawrakow
2024-02-19 | Allow for Vulkan build with Accelerate.                                     | Mathijs de Bruin
2024-02-19 | ggml : android and old glibc NUMA incompatibility bugfixes (#5557)          | bmwl
2024-02-18 | ggml, common, examples, tests : fixed type arguments in printf (#5528)      | Herman Semenov
2024-02-18 | 1.5 bit quantization (#5453)                                                | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488)                      | Georgi Gerganov
2024-02-17 | ci : add an option to fail on compile warning (#3952)                       | Ananta Bastola
2024-02-16 | ggml : add numa options (#5377)                                             | bmwl