| author | Georgi Gerganov <ggerganov@gmail.com> | 2024-04-29 17:06:19 +0300 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-04-29 17:06:19 +0300 |
| commit | 24affa7db3c9db148854b0ab4fd63de8bca7d898 (patch) | |
| tree | 7373756ea372632f6cb2b856198e9888a223b01b | |
| parent | f4ab2a41476600a98067a9474ea8f9e6db41bcfa (diff) | |
readme : update hot topics
-rw-r--r-- | README.md | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
@@ -20,7 +20,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387**
+- **BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920**
+- MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
 - Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017