author    Ian Scrivener <github@zilogy.asia>  2023-10-12 22:10:50 +1100
committer GitHub <noreply@github.com>          2023-10-12 14:10:50 +0300
commit    f3040beaab5228b1a9dfe5675a200379478f7204 (patch)
tree      6ee6463d9519c28f5b6d423eb3250948ff1f4230 /README.md
parent    1a8c8795d64b04df96c28f29faac2d6e256f53bc (diff)
typo : it is `--n-gpu-layers` not `--gpu-layers` (#3592)
Fixed a typo in the macOS Metal run documentation.
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 0f1fd756..60f14a1f 100644
--- a/README.md
+++ b/README.md
@@ -279,7 +279,7 @@ In order to build llama.cpp you have three different options.
On MacOS, Metal is enabled by default. Using Metal makes the computation run on the GPU.
To disable the Metal build at compile time use the `LLAMA_NO_METAL=1` flag or the `LLAMA_METAL=OFF` cmake option.
-When built with Metal support, you can explicitly disable GPU inference with the `--gpu-layers|-ngl 0` command-line
+When built with Metal support, you can explicitly disable GPU inference with the `--n-gpu-layers|-ngl 0` command-line
argument.
### MPI Build
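
For reference, a minimal usage sketch of the options mentioned in the changed paragraph; the model path and prompt below are illustrative only and are not taken from this diff:

```bash
# Disable the Metal build at compile time (one of the two options named above)
make LLAMA_NO_METAL=1

# Or, with a Metal-enabled build, keep inference on the CPU at run time
# by offloading 0 layers to the GPU (illustrative model path and prompt)
./main -m ./models/7B/ggml-model-q4_0.gguf -p "Hello" --n-gpu-layers 0
```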