| author | BarfingLemurs <128182951+BarfingLemurs@users.noreply.github.com> | 2023-09-27 11:30:36 -0400 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2023-09-27 18:30:36 +0300 |
| commit | ffe88a36a913e5792aa383f0726bdbcf632e7191 (patch) | |
| tree | 1674d754253bb233833f20245719e48c5b6c74c4 /README.md | |
| parent | 99115f3fa654b593099c6719ad30e3f54ce231e1 (diff) | |
readme : add some recent perplexity and bpw measurements to READMES, link for k-quants (#3340)
* Update README.md
* Update README.md
* Update README.md with k-quants bpw measurements
Diffstat (limited to 'README.md')
| -rw-r--r-- | README.md | 5 |
1 file changed, 5 insertions, 0 deletions
```diff
@@ -597,6 +597,11 @@ Several quantization methods are supported. They differ in the resulting model d
 | 13B | ms/tok @ 8th |    - |   73 |   82 |   98 |  105 |  128 |
 | 13B | bits/weight  | 16.0 |  4.5 |  5.0 |  5.5 |  6.0 |  8.5 |
+- [k-quants](https://github.com/ggerganov/llama.cpp/pull/1684)
+- recent k-quants improvements
+  - [#2707](https://github.com/ggerganov/llama.cpp/pull/2707)
+  - [#2807](https://github.com/ggerganov/llama.cpp/pull/2807)
+
 ### Perplexity (measuring model quality)
 
 You can use the `perplexity` example to measure perplexity over a given prompt (lower perplexity is better).
```
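The README text touched by this diff refers to perplexity as the quality metric. As a minimal sketch of the metric itself (not llama.cpp's implementation, which evaluates the model over chunks of a text corpus), perplexity is the exponential of the mean negative log-likelihood of the observed tokens; the `token_logprobs` input here is a hypothetical list of per-token log-probabilities:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood).

    token_logprobs: natural-log probabilities the model assigned to
    each observed token (hypothetical input for illustration).
    """
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token is as
# uncertain as a uniform 4-way choice, so its perplexity is 4.
print(perplexity([math.log(0.25)] * 100))
```

Lower perplexity means the model assigns higher probability to the reference text, which is why the quantization comparisons in this README report it alongside bits/weight.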