authorKawrakow <iwankawrakow@gmail.com>2025-02-27 16:40:49 +0200
committerGitHub <noreply@github.com>2025-02-27 16:40:49 +0200
commitb762db7c9264199c2d0f66e7d63e3b4884f3fc0c (patch)
tree01cc16988a4d21b4c1df367df23f4fd53e6b58a0 /include
parent51029edfdf286df76f9268fc87b9514291b2fe42 (diff)
Option to use MLA without a transposed cache (#235)
The `-mla` command line option changes from a bool to an int:
- mla = 0: use standard attention
- mla = 1: use MLA with transposed cache
- mla > 1: use MLA without transposed cache

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'include')
-rw-r--r--  include/llama.h  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/llama.h b/include/llama.h
index beb6ecba..2b33701c 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -383,7 +383,7 @@ extern "C" {
bool embeddings; // if true, extract embeddings (together with logits)
bool offload_kqv; // whether to offload the KQV ops (including the KV cache) to GPU
bool flash_attn; // whether to use flash attention [EXPERIMENTAL]
- bool mla_attn; // whether to use MLA attention [EXPERIMENTAL]
+ int mla_attn; // whether to use MLA attention [EXPERIMENTAL]
bool fused_moe_up_gate; // whether to use fused MoE up/down op [EXPERIMENTAL]
// Abort callback