path: root/examples/llava
author: 0cc4m <picard12@live.de> 2024-02-07 07:54:50 +0100
committer: GitHub <noreply@github.com> 2024-02-07 07:54:50 +0100
commit: ee1628bdfea8b0079fed0140ac2f00ef1b465b57 (patch)
tree: 42ee597afa79a6c4e0bb772d78a7cfcd54777696 /examples/llava
parent: ed0bf32290ee5b30ffad5becd99cbecef74aedd7 (diff)
Basic Vulkan Multi-GPU implementation (#5321)
* Initial Vulkan multi-gpu implementation

  Move most global variables into backend context

* Add names to backend device functions

* Add further missing cleanup code

* Reduce code duplication in tensor split layer assignment

* Generalize LLAMA_SPLIT_LAYER for all backends, do not expose device count and memory in llama.h

* Only do device info print in the beginning and initialize one backend for cpu assist

  Add missing cleanup code

* Rework backend memory management to make sure devices and buffers get properly allocated and freed

* Rename cpu assist free function

---------

Co-authored-by: slaren <slarengh@gmail.com>
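The commit message mentions tensor-split layer assignment and generalizing `LLAMA_SPLIT_LAYER` across backends. As a rough illustration of the underlying idea (not the actual llama.cpp code), the sketch below assigns each transformer layer to a device in proportion to user-supplied split fractions; the function name and signature are hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of proportional layer-to-device assignment.
// Given n_layers and per-device split fractions (e.g. relative VRAM
// shares), return the device index for each layer. Fractions are
// normalized, turned into cumulative shares, and each layer falls
// into the first device whose cumulative share covers its position.
std::vector<int> assign_layers(int n_layers, const std::vector<float>& split) {
    float total = 0.0f;
    for (float s : split) {
        total += s;
    }
    // Cumulative normalized shares, e.g. {2, 1} -> {0.667, 1.0}.
    std::vector<float> cum(split.size());
    float acc = 0.0f;
    for (size_t i = 0; i < split.size(); ++i) {
        acc += split[i] / total;
        cum[i] = acc;
    }
    std::vector<int> device(n_layers);
    for (int il = 0; il < n_layers; ++il) {
        // Position of this layer in [0, 1); advance to the first
        // device whose cumulative share exceeds it.
        float f = static_cast<float>(il) / static_cast<float>(n_layers);
        int d = 0;
        while (d + 1 < static_cast<int>(cum.size()) && cum[d] <= f) {
            ++d;
        }
        device[il] = d;
    }
    return device;
}
```

For example, four layers split evenly across two devices yields `{0, 0, 1, 1}`; the real implementation also has to place non-repeating tensors (embeddings, output head) and handle the CPU-assist backend mentioned above.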
Diffstat (limited to 'examples/llava')
0 files changed, 0 insertions, 0 deletions