author | Kawrakow <iwankawrakow@gmail.com> | 2025-01-29 14:05:41 +0200
---|---|---
committer | GitHub <noreply@github.com> | 2025-01-29 14:05:41 +0200
commit | 4a73c250023a74bb1665875bbced7f1a3857b7f6 (patch) |
tree | fc28c03f78c1715c1c48ac5274ad327368c7137e /examples/export-lora/export-lora.cpp |
parent | f725576345582144dfebd7f1e6c8ac93eb1eb0ca (diff) |
Various (#181)
* Adding a gp option to llama-bench
Similar to pg, but it only looks at TG speed for a given prompt length.
* Make q8_0_r4 work with tensor row sizes that are not a multiple of 128
They still need to be divisible by 32 (see the sketch after the commit message).
* Make q8_0_r4 work with tensor row sizes that are not a multiple of 128
... on NEON
* Make q8_0_r4 work with tensor row sizes that are not a multiple of 128
... on AVX2
* Make q4_0_r4 work with tensor row sizes that are not a multiple of 128
... on AVX2
* Make q4_0_r4 work with tensor row sizes that are not a multiple of 128
... on NEON
* Make q4_0_r4 work with tensor row sizes that are not a multiple of 128
... on Zen4.
Also fix the q8_0 K-cache for head sizes that are not a multiple of 128.
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
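The recurring theme in the bullets above is relaxing a row-size requirement from "multiple of 128" to "multiple of 32". The sketch below only illustrates that loop structure and is not the kernel code from this commit: the actual changes live in the Zen4/AVX2/NEON matrix-multiplication paths for q8_0_r4/q4_0_r4, and the helpers sum_chunk128/sum_block32 here are hypothetical stand-ins that simply accumulate values.

```cpp
// Minimal, self-contained sketch (assumed names, not the commit's kernels).
// Fast path: walk the row in 128-wide chunks (4 blocks of 32).
// New tail loop: finish leftover 32-wide blocks, so the row size only
// needs to be divisible by 32 rather than by 128.
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kBlock = 32;   // quantization block size (QK8_0)
constexpr int kChunk = 128;  // fast path handles 4 blocks per iteration

// Hypothetical stand-in for the 32-wide block kernel.
static int64_t sum_block32(const int8_t * q) {
    int64_t s = 0;
    for (int j = 0; j < kBlock; ++j) s += q[j];
    return s;
}

// Hypothetical stand-in for the 128-wide fast-path kernel.
static int64_t sum_chunk128(const int8_t * q) {
    int64_t s = 0;
    for (int j = 0; j < kChunk; j += kBlock) s += sum_block32(q + j);
    return s;
}

static int64_t process_row(const int8_t * q, int n_per_row) {
    assert(n_per_row % kBlock == 0);  // still required after this change
    int64_t acc = 0;
    int i = 0;
    for (; i + kChunk <= n_per_row; i += kChunk) acc += sum_chunk128(q + i); // main loop
    for (; i < n_per_row; i += kBlock)           acc += sum_block32(q + i);  // tail: rows not divisible by 128
    return acc;
}

int main() {
    std::vector<int8_t> row(160, 1);  // 160 = 5*32, not a multiple of 128
    std::printf("sum = %lld\n", (long long)process_row(row.data(), (int)row.size()));
    return 0;
}
```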
Diffstat (limited to 'examples/export-lora/export-lora.cpp')
0 files changed, 0 insertions, 0 deletions