path: root/examples/finetune/finetune.cpp
Age        | Commit message                                                                 | Author
2024-05-08 | ggml : introduce bfloat16 support (#6412)                                      | Justine Tunney
2024-02-25 | code : normalize enum names (#5697)                                            | Georgi Gerganov
2024-02-13 | finetune : rename feed-forward tensors (w1/w2/w3) (#4839)                      | Daniel Bevenius
2024-02-12 | sync : ggml (#5452)                                                            | Georgi Gerganov
2024-01-22 | finetune : print sample-start/include-sample-start (#5072)                     | Daniel Bevenius
2024-01-16 | finetune : add training data file to log message (#4979)                       | Daniel Bevenius
2024-01-16 | finetune : use LLAMA_FILE_MAGIC_GGLA (#4961)                                   | Daniel Bevenius
2024-01-04 | finetune : remove unused includes (#4756)                                      | Daniel Bevenius
2023-12-27 | finetune : fix output formatting in print_params (#4653)                       | Daniel Bevenius
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573)             | Georgi Gerganov
2023-12-17 | finetune : keep allocs alive until all allocations are done (#4486)            | slaren
2023-12-14 | ggml : remove n_dims from ggml_tensor (#4469)                                  | slaren
2023-11-19 | Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"             | Georgi Gerganov
2023-11-19 | finetune : add --n-gpu-layers flag info to --help (#4128)                      | Clark Saben
2023-11-17 | train : move number of gpu layers argument parsing to common/train.cpp (#4074) | Jiří Podivín
2023-11-17 | finetune : zero the loraB initial vectors (#4082)                              | Andrew Godfrey
2023-11-13 | sync : ggml (backend v2) (#3912)                                               | Georgi Gerganov
2023-11-07 | ggml : fix backward rope after YaRN (#3974)                                    | xaedes
2023-11-01 | llama : implement YaRN RoPE scaling (#2268)                                    | cebtenzzre
2023-11-01 | finetune : add -ngl parameter (#3762)                                          | Andrew Godfrey
2023-10-13 | ggml : add context enumeration functions (#3605)                               | slaren
2023-10-02 | finetune : fix #3404 (#3437)                                                   | xaedes
2023-09-29 | train : fix KQ_pos allocation (#3392)                                          | Georgi Gerganov
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301)   | slaren
2023-09-28 | train : finetune LORA (#2632)                                                  | xaedes