| author | Kawrakow <iwankawrakow@gmail.com> | 2025-02-23 14:31:11 +0200 |
| --- | --- | --- |
| committer | GitHub <noreply@github.com> | 2025-02-23 14:31:11 +0200 |
| commit | ac1d259b93eccfa7371c6b00c5749400ff2b2aea | |
| tree | fe8bb34c9dcbea805595c5087f00b188bb89fc05 /examples/tokenize | |
| parent | 46bf73a37f1aabe6f0b40365b0c7b2ba831905f5 | |
Fused MoE ffn_up and ffn_gate (#229)
* Fusing MoE up * unary(gate)
* Fusing MoE up * unary(gate): CUDA
We get ~13% speedup for PP-512 and ~2% for TG-128 for DeepSeek-Lite.
* On CUDA, also fuse MoE down * (up * unary(gate)) when the MUL_MAT_ID op for the down experts is the next op in the graph (see the sketch below).
* Command-line option to enable fused MoE up * unary(gate)
* Add fmoe option to llama-bench
* Add forgotten gelu, relu, silu on ARM
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
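
Below is a minimal scalar sketch of what the fused op computes, for orientation only: the function names, signatures, and row-major layouts are invented for illustration and are not the actual ggml/CUDA kernels added by this commit. The idea is that the up and gate projections of an expert are applied in one pass and combined as up * silu(gate) without materializing the activated gate, and, per the CUDA bullet above, the down projection can then consume that result directly.

```c
#include <math.h>
#include <stddef.h>

/* silu(x) = x * sigmoid(x); gelu and relu slot in the same way. */
static inline float silu(float x) { return x / (1.0f + expf(-x)); }

/* Hypothetical reference for fused "up * unary(gate)": both expert
 * projections are applied to x and combined element-wise in the same
 * loop, so no intermediate gate/activation tensors are written out. */
void fused_up_gate_silu(const float *w_up,   /* [n_ff][n_embd], row-major */
                        const float *w_gate, /* [n_ff][n_embd], row-major */
                        const float *x,      /* [n_embd] */
                        float *h,            /* [n_ff] out */
                        int n_ff, int n_embd) {
    for (int i = 0; i < n_ff; ++i) {
        float up = 0.0f, gate = 0.0f;
        for (int j = 0; j < n_embd; ++j) {
            up   += w_up  [(size_t)i * n_embd + j] * x[j];
            gate += w_gate[(size_t)i * n_embd + j] * x[j];
        }
        h[i] = up * silu(gate); /* fused combine, no extra pass */
    }
}

/* Sketch of the CUDA-side extension: when the down experts' MUL_MAT_ID
 * is the next op in the graph, feed h straight into the down projection,
 * i.e. y = W_down * (up * silu(gate)). */
void fused_ffn_silu(const float *w_up, const float *w_gate,
                    const float *w_down, /* [n_embd][n_ff], row-major */
                    const float *x,
                    float *h,            /* [n_ff] scratch */
                    float *y,            /* [n_embd] out */
                    int n_ff, int n_embd) {
    fused_up_gate_silu(w_up, w_gate, x, h, n_ff, n_embd);
    for (int j = 0; j < n_embd; ++j) {
        float acc = 0.0f;
        for (int i = 0; i < n_ff; ++i) {
            acc += w_down[(size_t)j * n_ff + i] * h[i];
        }
        y[j] = acc;
    }
}
```

Per the bullet list, the fusion is opt-in: a command-line option enables it, and llama-bench gains an fmoe option. The exact flag spelling is not visible in this view (the diffstat below is filtered to examples/tokenize), so check the argument parsing in the tree above.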
Diffstat (limited to 'examples/tokenize')
0 files changed, 0 insertions, 0 deletions