author    Kawrakow <iwankawrakow@gmail.com>  2025-02-23 14:31:11 +0200
committer GitHub <noreply@github.com>        2025-02-23 14:31:11 +0200
commit    ac1d259b93eccfa7371c6b00c5749400ff2b2aea (patch)
tree      fe8bb34c9dcbea805595c5087f00b188bb89fc05 /include/llama.h
parent    46bf73a37f1aabe6f0b40365b0c7b2ba831905f5 (diff)
Fused MoE ffn_up and ffn_gate (#229)
* Fusing MoE up * unary(gate)

* Fusing MoE up * unary(gate): CUDA

  We get ~13% speedup for PP-512 and ~2% for TG-128 for DeepSeek-Lite

* On CUDA also fuse MoE down * (up * unary(gate)) in case the MUL_MAT_ID op
  for the down experts is the next op in the graph.

* Command line option to enable fused MoE up*unary(gate)

* Add fmoe option to llama-bench

* Adding forgotten gelu, relu, silu on ARM

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
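For reference, a minimal sketch of the per-row semantics of the fused up * unary(gate)
op described above, using SiLU as the unary (the commit also covers GELU and ReLU).
The function names are illustrative only, not the actual ggml kernel:

    // Illustrative sketch, not the real CUDA/CPU kernel from this commit:
    // for each expert row, the fused op computes dst = up * silu(gate)
    // in one pass instead of a separate MUL_MAT_ID + unary + mul.
    #include <math.h>
    #include <stddef.h>

    static float silu(float x) { return x / (1.0f + expf(-x)); }

    // dst[i] = up[i] * silu(gate[i]) over one row of n values.
    static void fused_up_gate_row(float *dst, const float *up, const float *gate, size_t n) {
        for (size_t i = 0; i < n; ++i) {
            dst[i] = up[i] * silu(gate[i]);
        }
    }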
Diffstat (limited to 'include/llama.h')
-rw-r--r--  include/llama.h  1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/llama.h b/include/llama.h
index b5ad65e7..23e32642 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -377,6 +377,7 @@ extern "C" {
         bool offload_kqv;        // whether to offload the KQV ops (including the KV cache) to GPU
         bool flash_attn;         // whether to use flash attention [EXPERIMENTAL]
         bool mla_attn;           // whether to use MLA attention [EXPERIMENTAL]
+        bool fused_moe_up_gate;  // whether to use fused MoE up/down op [EXPERIMENTAL]
// Abort callback
// if it returns true, execution of llama_decode() will be aborted
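A minimal usage sketch of the new context flag through the public C API. This assumes
the model-loading entry points present in this tree at the time of the commit
(llama_load_model_from_file, llama_new_context_with_model); the model path is a placeholder:

    #include "llama.h"

    int main(void) {
        const char *model_path = "model.gguf";  // placeholder; use a real GGUF model

        struct llama_model_params mparams = llama_model_default_params();
        struct llama_model *model = llama_load_model_from_file(model_path, mparams);
        if (!model) return 1;

        struct llama_context_params cparams = llama_context_default_params();
        cparams.fused_moe_up_gate = true;  // enable the fused MoE up*unary(gate) op added here

        struct llama_context *ctx = llama_new_context_with_model(model, cparams);
        if (!ctx) { llama_free_model(model); return 1; }

        // ... run inference as usual ...

        llama_free(ctx);
        llama_free_model(model);
        return 0;
    }

On the command line, the same behavior is exposed via the option mentioned in the commit
message (and as the fmoe option in llama-bench).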