author    | Georgi Gerganov <ggerganov@gmail.com> | 2024-03-31 11:56:30 +0300
committer | GitHub <noreply@github.com>           | 2024-03-31 11:56:30 +0300
commit    | c50a82ce0f71558cbb8e555146ba124251504b38 (patch)
tree      | 33d19668ec46ec8ccd8ede47ffa91841a64ad562
parent    | 37e7854c104301c5b5323ccc40e07699f3a62c3e (diff)
readme : update hot topics
-rw-r--r-- | README.md | 2 |
1 file changed, 1 insertion, 1 deletion
@@ -18,12 +18,12 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
+- Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
 - Multi-GPU pipeline parallelizm support https://github.com/ggerganov/llama.cpp/pull/6017
 - Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
 - Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
 - Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328
-- Support loading sharded model, using `gguf-split` CLI https://github.com/ggerganov/llama.cpp/pull/6187
 
 ----
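For context on the new hot-topic entry: the `gguf-split` tool referenced above shards a single GGUF model file into several smaller files and can merge them back. The sketch below shows a typical invocation; the exact flag names, defaults, and output naming are assumptions based on the tool's example usage at the time and should be verified against `gguf-split --help` and the linked discussion 6404.

```sh
# Split one GGUF model into shards of at most ~2 GB each
# (assumed output naming: <prefix>-00001-of-0000N.gguf, ...).
./gguf-split --split --split-max-size 2G models/model.gguf models/model-shard

# Merge the shards back into a single GGUF file
# (pass the first shard; the tool is expected to locate the rest).
./gguf-split --merge models/model-shard-00001-of-00003.gguf models/model-merged.gguf
```

Note that after PR 6187, the main `llama.cpp` loaders can read the sharded files directly, so merging is only needed when a single-file model is required.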