| Date | Commit message | Author |
|---|---|---|
| 2023-09-28 | build : enable more non-default compiler warnings (#3200) | Cebtenzzre |
| 2023-09-28 | train : finetune LORA (#2632) | xaedes |
| 2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | Georgi Gerganov |
| 2023-09-15 | check C++ code with -Wmissing-declarations (#3184) | Cebtenzzre |
| 2023-09-01 | build : fix most gcc and clang warnings (#2861) | Cebtenzzre |
| 2023-07-25 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | Kawrakow |
| 2023-07-24 | make rms_norm_eps a parameter (#2374) | slaren |
| 2023-07-07 | ggml : change ggml_graph_compute() API to not require context (#1999) | Qingyou Meng |
| 2023-06-27 | baby-llama : fix build after ggml_rope change (#2016) | Howard Su |
| 2023-06-16 | build : fix and ignore MSVC warnings (#1889) | Borislav Stanimirov |
| 2023-06-13 | baby-llama : fix operator!= (#1821) | 0xspringtime |
| 2023-06-13 | train : improved training-from-scratch example (#1652) | xaedes |
| 2023-05-13 | ggml : implement backward pass for llama + small training-llama-from-scratch ... | xaedes |
