| author | Georgi Gerganov <ggerganov@gmail.com> | 2023-12-17 20:16:23 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2023-12-17 20:16:23 +0200 |
| commit | b1306c439490c7fa4ec33594500d980d1e9e15e6 (patch) | |
| tree | 9f741e6326d7f0246d1ebb07a7bb66bfb2a340e6 | |
| parent | 800a489e4a8be199122259a995b1ee9dd7fae320 (diff) | |
readme : update hot topics
| -rw-r--r-- | README.md | 6 |
|---|---|---|

1 file changed, 3 insertions(+), 3 deletions(-)
@@ -10,11 +10,11 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 ### Hot topics
 
+- Collecting Apple Silicon performance stats:
+  - M-series: https://github.com/ggerganov/llama.cpp/discussions/4167
+  - A-series: https://github.com/ggerganov/llama.cpp/discussions/4508
 - Added Mixtral support: https://github.com/ggerganov/llama.cpp/pull/4406
-- **llama.h API change for handling KV cache offloading and data type: https://github.com/ggerganov/llama.cpp/pull/4309**
-- Using `llama.cpp` with AWS instances: https://github.com/ggerganov/llama.cpp/discussions/4225
 - Looking for contributions to improve and maintain the `server` example: https://github.com/ggerganov/llama.cpp/issues/4216
-- Collecting Apple Silicon performance stats: https://github.com/ggerganov/llama.cpp/discussions/4167
 
 ----