| author | Kawrakow <iwankawrakow@gmail.com> | 2025-03-01 08:25:27 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-03-01 08:25:27 +0200 |
| commit | a79ab8f34222e1e0142a30eaa97e78ad077abca9 (patch) | |
| tree | 24f89079780736d697347e1ebbe6544750534e22 /include/llama.h | |
| parent | b762db7c9264199c2d0f66e7d63e3b4884f3fc0c (diff) | |
Reduce size of compute buffers (#237)
* This reduces compute buffer size for MLA
* This should accomplish it for standard attention
* Much better
* Better concat for contiguous tensors

  If all the op does is to concatenate the second tensor to the first, why would we want to have a loop? (See the sketch after the commit message.)
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
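
The contiguous-concat point above amounts to replacing a per-element copy loop with bulk copies. Below is a minimal C sketch of that idea, not the actual ggml kernel from this commit; the helper name and signature are made up for illustration and assume both sources and the destination are contiguous buffers laid out back to back.

```c
#include <string.h>
#include <stddef.h>

// Minimal sketch (hypothetical helper, not the real ggml concat op):
// when src0, src1 and dst are all contiguous and the result is simply
// src0 followed by src1 in memory, the whole concat is two memcpy calls
// instead of a loop over elements.
static void concat_contiguous(void * dst,
                              const void * src0, size_t nbytes0,
                              const void * src1, size_t nbytes1) {
    memcpy(dst, src0, nbytes0);                      // copy the first tensor
    memcpy((char *) dst + nbytes0, src1, nbytes1);   // append the second tensor
}
```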
Diffstat (limited to 'include/llama.h')
| -rw-r--r-- | include/llama.h | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/include/llama.h b/include/llama.h
index 2b33701c..bb43aebc 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -384,6 +384,7 @@ extern "C" {
         bool offload_kqv;       // whether to offload the KQV ops (including the KV cache) to GPU
         bool flash_attn;        // whether to use flash attention [EXPERIMENTAL]
         int  mla_attn;          // whether to use MLA attention [EXPERIMENTAL]
+        int  attn_max_batch;    // maximum batch size for attention computations [EXPERIMENTAL]
         bool fused_moe_up_gate; // whether to use fused MoE up/down op [EXPERIMENTAL]
 
         // Abort callback
```
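
For reference, a caller would set the new field through `llama_context_params`. This is a hedged usage sketch, not code from the commit: the chosen values (`mla_attn = 1`, `attn_max_batch = 64`) are illustrative assumptions, while `llama_context_default_params()`, `llama_load_model_from_file()`, and `llama_new_context_with_model()` are the existing llama.cpp C API entry points.

```c
#include "llama.h"

int main(void) {
    struct llama_context_params cparams = llama_context_default_params();

    cparams.flash_attn     = true;  // [EXPERIMENTAL] flash attention
    cparams.mla_attn       = 1;     // [EXPERIMENTAL] MLA attention; 1 is an assumed mode value
    cparams.attn_max_batch = 64;    // [EXPERIMENTAL] cap on attention batch size (assumed value);
                                    // smaller caps shrink compute buffers at some speed cost

    // ... load a model with llama_load_model_from_file() and create the
    // context with llama_new_context_with_model(model, cparams) ...
    return 0;
}
```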