author    Kawrakow <48489457+ikawrakow@users.noreply.github.com>  2024-07-27 07:55:01 +0200
committer GitHub <noreply@github.com>  2024-07-27 07:55:01 +0200
commit    154e0d75fccf1784fe9ff6fd76a630b66563da3d (patch)
tree      81ce6dbb5b1900c1aa78a879f0593c694cab9d27 /examples/export-lora/README.md
parent    0684c3e9c70d49323b4fc517128cbe222cab7f96 (diff)
Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower
as it is so often the case with llama.cpp/ggml after some "improvements"
have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'examples/export-lora/README.md')
-rw-r--r--  examples/export-lora/README.md | 23
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/examples/export-lora/README.md b/examples/export-lora/README.md
index 1fb17fee..91c33c34 100644
--- a/examples/export-lora/README.md
+++ b/examples/export-lora/README.md
@@ -6,12 +6,11 @@ Apply LORA adapters to base model and export the resulting model.
usage: llama-export-lora [options]
options:
-  -h, --help                         show this help message and exit
-  -m FNAME, --model-base FNAME       model path from which to load base model (default '')
-  -o FNAME, --model-out FNAME        path to save exported model (default '')
-  -l FNAME, --lora FNAME             apply LoRA adapter
-  -s FNAME S, --lora-scaled FNAME S  apply LoRA adapter with user defined scaling S
-  -t N, --threads N                  number of threads to use during computation (default: 4)
+  -m,    --model                 model path from which to load base model (default '')
+         --lora FNAME            path to LoRA adapter (can be repeated to use multiple adapters)
+         --lora-scaled FNAME S   path to LoRA adapter with user defined scaling S (can be repeated to use multiple adapters)
+  -t,    --threads N             number of threads to use during computation (default: 4)
+  -o,    --output FNAME          output file (default: 'ggml-lora-merged-f16.gguf')
```
For example:
@@ -20,7 +19,15 @@ For example:
./bin/llama-export-lora \
    -m open-llama-3b-v2-q8_0.gguf \
    -o open-llama-3b-v2-q8_0-english2tokipona-chat.gguf \
-    -l lora-open-llama-3b-v2-q8_0-english2tokipona-chat-LATEST.bin
+    --lora lora-open-llama-3b-v2-q8_0-english2tokipona-chat-LATEST.gguf
```
-Multiple LORA adapters can be applied by passing multiple `-l FN` or `-s FN S` command line parameters.
+Multiple LORA adapters can be applied by passing multiple `--lora FNAME` or `--lora-scaled FNAME S` command line parameters:
+
+```bash
+./bin/llama-export-lora \
+    -m your_base_model.gguf \
+    -o your_merged_model.gguf \
+    --lora-scaled lora_task_A.gguf 0.5 \
+    --lora-scaled lora_task_B.gguf 0.5
+```
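
A quick way to sanity-check the result (not part of this diff; a minimal sketch assuming `llama-cli` was built into the same `bin/` directory, and reusing the hypothetical `your_merged_model.gguf` name from the example above) is to load the merged GGUF like any standalone model:

```bash
# Load the merged model directly; no --lora flags are needed anymore,
# since the adapter weights are now baked into the exported model.
./bin/llama-cli \
    -m your_merged_model.gguf \
    -p "Hello" \
    -n 32
```

If the merge succeeded, generation should reflect the adapters' behavior without any LoRA files present at inference time.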