ik_llama.cpp.git (branch: main) - commit log for examples/finetune/finetune.cpp
Age         Commit message  (Author)
2024-05-08  ggml : introduce bfloat16 support (#6412)  (Justine Tunney)
2024-02-25  code : normalize enum names (#5697)  (Georgi Gerganov)
2024-02-13  finetune : rename feed-forward tensors (w1/w2/w3) (#4839)  (Daniel Bevenius)
2024-02-12  sync : ggml (#5452)  (Georgi Gerganov)
2024-01-22  finetune : print sample-start/include-sample-start (#5072)  (Daniel Bevenius)
2024-01-16  finetune : add training data file to log message (#4979)  (Daniel Bevenius)
2024-01-16  finetune : use LLAMA_FILE_MAGIC_GGLA (#4961)  (Daniel Bevenius)
2024-01-04  finetune : remove unused includes (#4756)  (Daniel Bevenius)
2023-12-27  finetune : fix output formatting in print_params (#4653)  (Daniel Bevenius)
2023-12-21  ggml : change ggml_scale to take a float instead of tensor (#4573)  (Georgi Gerganov)
2023-12-17  finetune : keep allocs alive until all allocations are done (#4486)  (slaren)
2023-12-14  ggml : remove n_dims from ggml_tensor (#4469)  (slaren)
2023-11-19  Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"  (Georgi Gerganov)
2023-11-19  finetune : add --n-gpu-layers flag info to --help (#4128)  (Clark Saben)
2023-11-17  train : move number of gpu layers argument parsing to common/train.cpp (#4074)  (Jiří Podivín)
2023-11-17  finetune : zero the loraB initial vectors (#4082)  (Andrew Godfrey)
2023-11-13  sync : ggml (backend v2) (#3912)  (Georgi Gerganov)
2023-11-07  ggml : fix backward rope after YaRN (#3974)  (xaedes)
2023-11-01  llama : implement YaRN RoPE scaling (#2268)  (cebtenzzre)
2023-11-01  finetune : add -ngl parameter (#3762)  (Andrew Godfrey)
2023-10-13  ggml : add context enumeration functions (#3605)  (slaren)
2023-10-02  finetune : fix #3404 (#3437)  (xaedes)
2023-09-29  train : fix KQ_pos allocation (#3392)  (Georgi Gerganov)
2023-09-28  llama.cpp : split llama_context_params into model and context params (#3301)  (slaren)
2023-09-28  train : finetune LORA (#2632)  (xaedes)
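
Several of the entries above are public ggml C API changes rather than finetune-local fixes. As one illustration, #4573 changed ggml_scale so that the scale factor is passed as a plain float instead of as a 1-element tensor. Below is a minimal, self-contained sketch of the post-#4573 call, assuming a ggml revision from around late 2023; the graph helpers used here (ggml_new_graph, ggml_graph_compute_with_ctx) have shifted between versions, so treat this as illustrative rather than exact.

```c
// Minimal sketch of ggml_scale after #4573 (float scale factor).
// Assumption: a late-2023 ggml revision; graph helpers have changed since.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,  // arena for tensors + graph + work buffer
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // 4-element f32 tensor filled with 3.0
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    ggml_set_f32(a, 3.0f);

    // After #4573 the factor is a plain float; before, it was a
    // 1-element tensor, e.g. ggml_scale(ctx, a, ggml_new_f32(ctx, 0.5f)).
    struct ggml_tensor * scaled = ggml_scale(ctx, a, 0.5f);

    // Build and run the forward graph.
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, scaled);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/ 1);

    printf("scaled[0] = %f\n", ggml_get_f32_1d(scaled, 0));  // 1.500000

    ggml_free(ctx);
    return 0;
}
```

One plausible benefit for heavy graph builders such as finetune.cpp is that a constant factor no longer requires allocating a scalar tensor per scale operation, though the exact motivation is best taken from the #4573 discussion itself.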