author    Jun Jie <71215065+junnjiee16@users.noreply.github.com>  2024-04-05 01:16:37 +0800
committer GitHub <noreply@github.com>  2024-04-04 13:16:37 -0400
commit    b660a5729e1e7508671d3d0515fd7efaeaeb85b9 (patch)
tree      02d84a6a7682f0ca56ce7d80b207b3333abf9321
parent    0a1d889e27d6aaa3293dd2c692b849a9bcf4b474 (diff)
readme : fix typo (#6481)
-rw-r--r--  README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
@@ -21,7 +21,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 - **MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387**
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
-- Multi-GPU pipeline parallelizm support https://github.com/ggerganov/llama.cpp/pull/6017
+- Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
 - Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
 - Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
 - Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328