From ac1d259b93eccfa7371c6b00c5749400ff2b2aea Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Sun, 23 Feb 2025 14:31:11 +0200
Subject: Fused MoE ffn_up and ffn_gate (#229)

* Fusing MoE up * unary(gate)

* Fusing MoE up * unary(gate): CUDA

  We get ~13% speedup for PP-512 and ~2% for TG-128 for DeepSeek-Lite

* On CUDA also fuse MoE down * (up * unary(gate)) in case the MUL_MAT_ID
  op for the down experts is the next op in the graph.

* Command line option to enable fused MoE up*unary(gate)

* Add fmoe option to llama-bench

* Adding forgotten gelu, relu, silu on ARM

---------

Co-authored-by: Iwan Kawrakow
---
 include/llama.h | 1 +
 1 file changed, 1 insertion(+)

(limited to 'include')

diff --git a/include/llama.h b/include/llama.h
index b5ad65e7..23e32642 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -377,6 +377,7 @@ extern "C" {
         bool offload_kqv;        // whether to offload the KQV ops (including the KV cache) to GPU
         bool flash_attn;         // whether to use flash attention [EXPERIMENTAL]
         bool mla_attn;           // whether to use MLA attention [EXPERIMENTAL]
+        bool fused_moe_up_gate;  // whether to use fused MoE up/down op [EXPERIMENTAL]
 
         // Abort callback
         // if it returns true, execution of llama_decode() will be aborted
--
cgit v1.2.3
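
Note (not part of the patch): a minimal usage sketch in C showing how a caller could opt in to the new fused MoE path through the public API. Only the fused_moe_up_gate field comes from this commit; the surrounding calls (llama_backend_init, llama_load_model_from_file, llama_new_context_with_model, ...) are the usual llama.h entry points assumed from upstream, and the model path is a placeholder.

#include <stdbool.h>
#include <stddef.h>
#include "llama.h"

int main(void) {
    llama_backend_init();

    // Load a model as usual; the path here is just a placeholder.
    struct llama_model_params mparams = llama_model_default_params();
    struct llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        return 1;
    }

    // Opt in to the experimental fused MoE up*unary(gate) op added by this commit.
    struct llama_context_params cparams = llama_context_default_params();
    cparams.fused_moe_up_gate = true;

    struct llama_context * ctx = llama_new_context_with_model(model, cparams);
    if (ctx == NULL) {
        llama_free_model(model);
        return 1;
    }

    // ... run prompt processing / token generation here ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}

The commit message also mentions a command line option for the same toggle and an fmoe option for llama-bench; since the exact flag spelling is not shown in the hunk above, the sketch sticks to the struct field.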