| author | Georgi Gerganov <ggerganov@gmail.com> | 2024-03-10 20:58:26 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-03-10 20:58:26 +0200 |
| commit | d9f65c97c3dc3aa6fa27470b8c6e69b437ec1a27 (patch) | |
| tree | 6fc0673bb3e96c6601b7f36d23b722f63034cc66 /README.md | |
| parent | b838b53ad6de2e53f23ddf8f3ad5e6891cc3dd05 (diff) | |
readme : update hot topics
Diffstat (limited to 'README.md')
-rw-r--r-- | README.md | 7
1 file changed, 2 insertions, 5 deletions
```diff
@@ -8,11 +8,6 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others) in pure C/C++
 
-> [!IMPORTANT]
-> **Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962**
->
-> Vote for which quantization type provides better responses, all other parameters being the same.
-
 ### Recent API changes
 
 - [2024 Mar 8] `llama_kv_cache_seq_rm()` returns a `bool` instead of `void`, and new `llama_n_max_seq()` returns the upper limit of acceptable `seq_id` in batches (relevant when dealing with multiple sequences) https://github.com/ggerganov/llama.cpp/pull/5328
@@ -21,6 +16,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
+- Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
+- Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
 - Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328
 
 ----
```
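The [2024 Mar 8] API change mentioned in the context lines above means callers can now detect whether a KV-cache sequence removal actually succeeded. Below is a rough caller-side sketch, not part of this commit: it assumes the post-PR-5328 `llama.h` declarations of `llama_kv_cache_seq_rm()` and `llama_n_max_seq()`, and the helper `clear_sequence()` is hypothetical.

```c
// Hypothetical caller-side sketch (not from this commit); assumes the
// post-PR-5328 llama.h declarations: bool llama_kv_cache_seq_rm(...) and
// uint32_t llama_n_max_seq(...).
#include <stdio.h>
#include "llama.h"

// Remove tokens in [p0, p1) for one sequence and report whether it worked.
static bool clear_sequence(struct llama_context * ctx, llama_seq_id seq_id,
                           llama_pos p0, llama_pos p1) {
    // Since 2024 Mar 8 the removal returns a bool instead of void.
    if (!llama_kv_cache_seq_rm(ctx, seq_id, p0, p1)) {
        // llama_n_max_seq() (added in the same change) gives the upper limit of
        // acceptable seq_id values, which helps diagnose out-of-range ids.
        fprintf(stderr, "kv cache seq_rm failed: seq_id=%d (max seq: %u), range [%d, %d)\n",
                seq_id, llama_n_max_seq(ctx), p0, p1);
        return false;
    }
    return true;
}
```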