author:    BarfingLemurs <128182951+BarfingLemurs@users.noreply.github.com>  2023-09-29 08:50:35 -0400
committer: GitHub <noreply@github.com>  2023-09-29 15:50:35 +0300
commit:    0a4a4a098261ddd26480371eaccfe90d1bf6488a
tree:      6ffc1945f5466e9dfb631a8cb08627b960697726
parent:    569550df20c1ede59ff195a6b6e900957ad84d16
readme : update hot topics + model links (#3399)
-rw-r--r--  README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
@@ -11,7 +11,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 ### Hot topics
-- Parallel decoding + continuous batching support incoming: [#3228](https://github.com/ggerganov/llama.cpp/pull/3228) \
+- Parallel decoding + continuous batching support added: [#3228](https://github.com/ggerganov/llama.cpp/pull/3228) \
   **Devs should become familiar with the new API**
 - Local Falcon 180B inference on Mac Studio
@@ -92,7 +92,8 @@ as the main playground for developing new features for the [ggml](https://github
 - [X] [WizardLM](https://github.com/nlpxucan/WizardLM)
 - [X] [Baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) and its derivations (such as [baichuan-7b-sft](https://huggingface.co/hiyouga/baichuan-7b-sft))
 - [X] [Aquila-7B](https://huggingface.co/BAAI/Aquila-7B) / [AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
-- [X] Mistral AI v0.1
+- [X] [Starcoder models](https://github.com/ggerganov/llama.cpp/pull/3187)
+- [X] [Mistral AI v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)

 **Bindings:**