path: root/examples/embedding/embedding.cpp
author    Kawrakow <iwankawrakow@gmail.com>    2025-03-01 08:25:27 +0200
committer GitHub <noreply@github.com>          2025-03-01 08:25:27 +0200
commit    a79ab8f34222e1e0142a30eaa97e78ad077abca9 (patch)
tree      24f89079780736d697347e1ebbe6544750534e22 /examples/embedding/embedding.cpp
parent    b762db7c9264199c2d0f66e7d63e3b4884f3fc0c (diff)
Reduce size of compute buffers (#237)
* This reduces compute buffer size for MLA
* This should accomplish it for standard attention
* Much better
* Better concat for contiguous tensors

If all the op does is to concatenate the second tensor to the first, why would we want to have a loop?

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
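The last point reasons that when both inputs are contiguous and the result is simply the second tensor appended to the first, a per-element loop is unnecessary. A minimal C++ sketch of that idea follows; the Tensor struct and concat_contiguous function are illustrative stand-ins, not the actual ggml types or the code in this commit, and the fast path assumes the destination buffer is preallocated and the concatenation dimension is the outermost one so the two blocks of bytes land back to back.

    // Hypothetical sketch: fast path for concatenating two contiguous tensors
    // along the outermost dimension. The result is the bytes of the first
    // tensor followed by the bytes of the second, so two memcpy calls replace
    // the per-element loop.
    #include <cstring>
    #include <cstddef>
    #include <cstdint>

    struct Tensor {          // simplified stand-in for a ggml-style tensor
        void * data;         // contiguous backing storage
        size_t nbytes;       // total size of the data in bytes
    };

    void concat_contiguous(const Tensor & a, const Tensor & b, Tensor & dst) {
        // dst.data is assumed to hold at least a.nbytes + b.nbytes bytes
        std::memcpy(dst.data, a.data, a.nbytes);
        std::memcpy(static_cast<std::uint8_t *>(dst.data) + a.nbytes,
                    b.data, b.nbytes);
    }

A non-contiguous or inner-dimension concat would still need the general loop; the point of the commit message is only that the common contiguous case should not pay for it.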
Diffstat (limited to 'examples/embedding/embedding.cpp')
0 files changed, 0 insertions, 0 deletions