author    Kawrakow <iwankawrakow@gmail.com>    2025-03-12 07:21:46 +0200
committer GitHub <noreply@github.com>          2025-03-12 07:21:46 +0200
commit    3f23ed68f17583a8ee63afd0c214f5b39226226c (patch)
tree      ad86914fd2925935247d2fba0ebb3b8b5d2c9bfc /examples/perplexity
parent    a48e16324770bb829406d06e11be1df0c8a3b517 (diff)
MLA-2: Allow usage of q8_0 for KV cache on CUDA (#252)
* FlashMLA(CUDA): WIP to allow q8_0 quantized cache

* WIP

* FlashMLA(CUDA) - allow q8_0 for KV cache

  This works, and PP is not bad, but TG is still quite a bit slower.

* FlashMLA(CUDA) - allow q8_0 for KV cache

  This is better. ~9% slower than f16 cache for short contexts, nearly on par at 16k tokens.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
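For context, here is a minimal sketch (not part of this commit) of how a q8_0 KV cache could be requested through the llama.cpp-style C API that this fork inherits. The field names `type_k`, `type_v`, and `flash_attn` follow upstream llama.cpp; the fork's MLA-specific switches are not shown in this commit and are therefore omitted here.

```cpp
// Hypothetical sketch: request a q8_0-quantized KV cache via the llama.cpp C API.
// Assumes upstream field names (type_k, type_v, flash_attn); the fork's own
// MLA options are omitted because their exact names are not part of this diff.
#include "llama.h"

int main() {
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (!model) return 1;

    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx      = 16384;            // long context, where q8_0 is nearly on par with f16
    cparams.flash_attn = true;             // route attention through the flash-attention kernels
    cparams.type_k     = GGML_TYPE_Q8_0;   // quantize the K cache
    cparams.type_v     = GGML_TYPE_Q8_0;   // quantize the V cache

    llama_context * ctx = llama_new_context_with_model(model, cparams);
    // ... evaluate tokens as usual ...
    llama_free(ctx);
    llama_free_model(model);
    return 0;
}
```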
Diffstat (limited to 'examples/perplexity')
0 files changed, 0 insertions, 0 deletions