| author | slaren <slarengh@gmail.com> | 2024-03-13 18:54:21 +0100 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-03-13 18:54:21 +0100 |
| commit | f30ea47a87ed4446ad55adb265755dc9102956a2 (patch) | |
| tree | fc885962ca3d537cfdfbd6b4a2820b7c864b1ee0 /examples/llama.swiftui/llama.cpp.swift | |
| parent | d8fd0ccf6ac8b07791ffd1575eed436930854ae3 (diff) | |
llama : add pipeline parallelism support (#6017)
* llama : add pipeline parallelism support for batch processing with multiple CUDA GPUs
ggml-ci
* server : add -ub, --ubatch-size parameter
* fix server embedding test
* llama : fix Mamba inference for pipeline parallelism
Tested to work correctly with both `main` and `parallel` examples.
* llama : limit max batch size to n_batch
* add LLAMA_SCHED_MAX_COPIES to configure the number of input copies for pipeline parallelism
default increased to 4 (from 2)
changing this value may improve performance for some systems, but increases memory usage
* fix hip build
* fix sycl build (disable cpy_tensor_async)
* fix hip build
* llama : limit n_batch and n_ubatch to n_ctx during context creation (a usage sketch of these parameters follows the commit message)
* llama : fix norm backend
* batched-bench : sync after decode
* swiftui : sync after decode
* ggml : allow ggml_get_rows to use multiple threads if they are available
* check n_ubatch >= n_tokens with non-causal attention
* llama : do not limit n_batch to n_ctx with non-causal attn
* server : construct batch with size of llama_n_batch
* ggml_backend_cpu_graph_compute : fix return value when alloc fails
* llama : better n_batch and n_ubatch comment
* fix merge
* small fix
* reduce default n_batch to 2048
---------
Co-authored-by: Francis Couture-Harpin <git@compilade.net>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
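
The commit message above describes a split between a logical batch size (`n_batch`, the maximum number of tokens submitted to a single `llama_decode` call) and a physical micro-batch size (`n_ubatch`, used to drive pipeline parallelism across devices). The following is a minimal, illustrative Swift sketch of setting these fields when creating a context through the C API, in the style of the swiftui example; the model path and the specific parameter values are placeholders, not part of this commit.

```swift
import llama

// Sketch only: configure logical (n_batch) and physical (n_ubatch) batch sizes.
llama_backend_init()

let modelPath = "models/model-q4_0.gguf"          // hypothetical path
let modelParams = llama_model_default_params()
guard let model = llama_load_model_from_file(modelPath, modelParams) else {
    fatalError("failed to load model")
}

var ctxParams = llama_context_default_params()
ctxParams.n_ctx    = 4096   // context size
ctxParams.n_batch  = 2048   // logical batch: max tokens per llama_decode call
ctxParams.n_ubatch = 512    // physical micro-batch used for pipeline parallelism

guard let context = llama_new_context_with_model(model, ctxParams) else {
    fatalError("failed to create context")
}

// ... run llama_decode(...) as usual ...

llama_free(context)
llama_free_model(model)
llama_backend_free()
```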
Diffstat (limited to 'examples/llama.swiftui/llama.cpp.swift')
-rw-r--r-- | examples/llama.swiftui/llama.cpp.swift/LibLlama.swift | 2 |
1 file changed, 2 insertions, 0 deletions
```diff
diff --git a/examples/llama.swiftui/llama.cpp.swift/LibLlama.swift b/examples/llama.swiftui/llama.cpp.swift/LibLlama.swift
index 58fcf40c..c249291a 100644
--- a/examples/llama.swiftui/llama.cpp.swift/LibLlama.swift
+++ b/examples/llama.swiftui/llama.cpp.swift/LibLlama.swift
@@ -221,6 +221,7 @@ actor LlamaContext {
         if llama_decode(context, batch) != 0 {
             print("llama_decode() failed during prompt")
         }
+        llama_synchronize(context)

         let t_pp_end = ggml_time_us()

@@ -240,6 +241,7 @@ actor LlamaContext {
             if llama_decode(context, batch) != 0 {
                 print("llama_decode() failed during text generation")
             }
+            llama_synchronize(context)
         }

         let t_tg_end = ggml_time_us()
```
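
For context on why the two `llama_synchronize` calls are added: with pipeline parallelism, `llama_decode` can return before the backend has finished processing the submitted batch, so the benchmark timestamps taken with `ggml_time_us()` in `LibLlama.swift` would otherwise under-report the work. Below is a small illustrative sketch of the same pattern; the helper function and its names are not part of the commit, and `context`/`batch` are assumed to be set up as in `LibLlama.swift` after `llama_backend_init()`.

```swift
import llama

// Illustrative helper (not from the commit): time a single decode call,
// waiting for the backend to finish before reading the end timestamp.
func timedDecode(context: OpaquePointer, batch: llama_batch) -> Double {
    let tStart = ggml_time_us()

    if llama_decode(context, batch) != 0 {
        print("llama_decode() failed")
    }
    // llama_decode may return before backend work completes; wait for it.
    llama_synchronize(context)

    let tEnd = ggml_time_us()
    return Double(tEnd - tStart) / 1_000_000.0  // elapsed seconds
}
```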