path: root/ggml.h
| Age | Commit message | Author |
|---|---|---|
| 2023-12-22 | ggml : extend `enum ggml_log_level` with `GGML_LOG_LEVEL_DEBUG` (#4579) | bobqianic |
| 2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov |
| 2023-12-21 | llama : initial ggml-backend integration (#4520) | slaren |
| 2023-12-19 | ggml : fixed check for _MSC_VER (#4535) | Eric Sommerlade |
| 2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | Ebey Abraham |
| 2023-12-14 | ggml : use ggml_row_size where possible (#4472) | slaren |
| 2023-12-14 | ggml : remove n_dims from ggml_tensor (#4469) | slaren |
| 2023-12-14 | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | LostRuins |
| 2023-12-13 | sync : ggml (SD ops, tests, kernels) (#4444) | Georgi Gerganov |
| 2023-12-13 | llama : add Mixtral support (#4406) | slaren |
| 2023-12-12 | ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (#4424) | Taikono-Himazin |
| 2023-12-07 | sync : ggml (new ops, tests, backend, etc.) (#4359) | Georgi Gerganov |
| 2023-12-01 | ggml : add ggml_soft_max_ext (#4256) | Georgi Gerganov |
| 2023-11-28 | ggml : restore abort() in GGML_ASSERT (#4242) | Jared Van Bortel |
| 2023-11-17 | llama : add functions to get the model's metadata (#4013) | slaren |
| 2023-11-13 | ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060) | Georgi Gerganov |
| 2023-11-13 | sync : ggml (backend v2) (#3912) | Georgi Gerganov |
| 2023-11-07 | ggml : fix backward rope after YaRN (#3974) | xaedes |
| 2023-11-01 | llama : implement YaRN RoPE scaling (#2268) | cebtenzzre |
| 2023-11-01 | llama : refactor graph build code (#3837) | Georgi Gerganov |
| 2023-10-29 | ggml : quantization refactoring (#3833) | Georgi Gerganov |
| 2023-10-24 | sync : ggml (conv ops + cuda MSVC fixes) (#3765) | Georgi Gerganov |
| 2023-10-20 | gguf : support big endian platform (#3552) | Qin Yue Chen |
| 2023-10-13 | ggml : add context enumeration functions (#3605) | slaren |
| 2023-10-08 | sync : ggml (ggml-backend) (#3548) | Georgi Gerganov |
| 2023-10-04 | sync : ggml (conv 1d + 2d updates, UB fixes) (#3468) | Georgi Gerganov |
| 2023-09-28 | build : enable more non-default compiler warnings (#3200) | Cebtenzzre |
| 2023-09-28 | ggml_tensor: update the structure comments. (#3283) | Hua Jiang |
| 2023-09-28 | train : finetune LORA (#2632) | xaedes |
| 2023-09-28 | gguf : basic type checking in gguf_get_* (#3346) | Cebtenzzre |
| 2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | Georgi Gerganov |
| 2023-09-27 | metal : reusing llama.cpp logging (#3152) | Rickard Hallerbäck |
| 2023-09-15 | sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192) | Georgi Gerganov |
| 2023-09-12 | arm64 support for windows (#3007) | Eric Sommerlade |
| 2023-08-29 | ggml : add view_src and view_offs to ggml_tensor for views (#2874) | slaren |
| 2023-08-28 | train : mem usage and other improvements (#2439) | xaedes |
| 2023-08-28 | ggml : sync (mem align to header + conv_transpose_2d fixes + ggml_alloc) (#2852) | Georgi Gerganov |
| 2023-08-27 | gguf : add 64-bit support (GGUF v2) (#2821) | Georgi Gerganov |
| 2023-08-27 | ggml : detect SSSE3 (#2825) | Przemysław Pawełczyk |
| 2023-08-23 | llm : add Falcon support (#2717) | Georgi Gerganov |
| 2023-08-22 | ggml : sync latest (SAM + SD operators, CUDA alibi) (#2709) | Georgi Gerganov |
| 2023-08-22 | ggml : support CUDA's half type for aarch64 (#1455) (#2670) | Kylin |
| 2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398) | Georgi Gerganov |
| 2023-08-20 | ggml : move all type info to ggml_type_traits (#2663) | slaren |
| 2023-08-07 | ggml : sync (custom ops) (#2537) | Georgi Gerganov |
| 2023-07-30 | ggml : add graph tensor allocator (#2411) | slaren |
| 2023-07-26 | ggml : allocate graphs in a context (#2392) | slaren |
| 2023-07-25 | ggml : improve graph build time via hash table lookup (#2329) | slaren |
| 2023-07-24 | make rms_norm_eps a parameter (#2374) | slaren |
| 2023-07-24 | ggml : sync (unary ops refactor, static-correctness) (#2370) | Georgi Gerganov |