Age | Commit message | Author |
---|---|---|
2024-01-31 | llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240) | Georgi Gerganov |
2024-01-03 | train : fix typo in overlapping-samples help msg (#4758) | Daniel Bevenius |
2023-12-14 | ggml : remove n_dims from ggml_tensor (#4469) | slaren |
2023-11-17 | train : move number of gpu layers argument parsing to common/train.cpp (#4074) | Jiří Podivín |
2023-11-13 | sync : ggml (backend v2) (#3912) | Georgi Gerganov |
2023-11-01 | finetune : add -ngl parameter (#3762) | Andrew Godfrey |
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720) | Marcus Dunn |
2023-10-20 | ggml : fix rope + llama minor optimizations (#3560) | Herman Semenov |
2023-10-17 | tokenizer : special token handling (#3538) | staviq |
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301) | slaren |
2023-09-28 | train : finetune LORA (#2632) | xaedes |