| author | Georgi Gerganov <ggerganov@gmail.com> | 2024-05-07 21:43:13 +0300 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-05-07 21:43:13 +0300 |
| commit | 53d6c52e227dedef347b21e28febcfb9caeecdad (patch) | |
| tree | b41015000f1ef1aee1e21e98b8df7019e789ed14 | |
| parent | 3af34c1d1b0da47f85b95f60922abeded1cb5d33 (diff) | |
readme : update hot topics
| -rw-r--r-- | README.md | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
@@ -20,7 +20,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920**
+- **Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021**
+- BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
 - MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225