author | Georgi Gerganov <ggerganov@gmail.com> | 2023-11-02 20:44:12 +0200 |
---|---|---|
committer | GitHub <noreply@github.com> | 2023-11-02 20:44:12 +0200 |
commit | 224e7d5b14cbabab7ae45c64db2cfde979c8455d (patch) | |
tree | d8b7dde2185863a922c29948f8a02f3cc496d023 | |
parent | c7743fe1c1cbda5a886362aa371480360580fdf0 (diff) | |
readme : add notice about #3912
-rw-r--r-- | README.md | 4 |
1 file changed, 1 insertion, 3 deletions
```diff
--- a/README.md
+++ b/README.md
@@ -2,7 +2,6 @@
-[](https://github.com/ggerganov/llama.cpp/actions) [](https://opensource.org/licenses/MIT)
 [Roadmap](https://github.com/users/ggerganov/projects/7) / [Project status](https://github.com/ggerganov/llama.cpp/discussions/3471) / [Manifesto](https://github.com/ggerganov/llama.cpp/discussions/205) / [ggml](https://github.com/ggerganov/ggml)
@@ -11,8 +10,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 ### Hot topics
-- LLaVA support: https://github.com/ggerganov/llama.cpp/pull/3436
-- ‼️ BPE tokenizer update: existing Falcon and Starcoder `.gguf` models will need to be reconverted: [#3252](https://github.com/ggerganov/llama.cpp/pull/3252)
+- ⚠️ **Upcoming change that might break functionality. Help with testing is needed:** https://github.com/ggerganov/llama.cpp/pull/3912
 ----
```