| Date | Commit message | Author |
|---|---|---|
| 2023-11-19 | Revert "finetune : add --n-gpu-layers flag info to --help (#4128)" | Georgi Gerganov |
| 2023-11-19 | finetune : add --n-gpu-layers flag info to --help (#4128) | Clark Saben |
| 2023-11-17 | train : move number of gpu layers argument parsing to common/train.cpp (#4074) | Jiří Podivín |
| 2023-11-17 | finetune : zero the loraB initial vectors (#4082) | Andrew Godfrey |
| 2023-11-13 | sync : ggml (backend v2) (#3912) | Georgi Gerganov |
| 2023-11-07 | ggml : fix backward rope after YaRN (#3974) | xaedes |
| 2023-11-01 | llama : implement YaRN RoPE scaling (#2268) | cebtenzzre |
| 2023-11-01 | finetune : add -ngl parameter (#3762) | Andrew Godfrey |
| 2023-10-13 | ggml : add context enumeration functions (#3605) | slaren |
| 2023-10-02 | finetune : fix #3404 (#3437) | xaedes |
| 2023-09-29 | train : fix KQ_pos allocation (#3392) | Georgi Gerganov |
| 2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301) | slaren |
| 2023-09-28 | train : finetune LORA (#2632) | xaedes |
