author     Kawrakow <iwankawrakow@gmail.com>    2025-01-10 15:06:00 +0200
committer  GitHub <noreply@github.com>          2025-01-10 15:06:00 +0200
commit     b1363b6177661556750c110cf876e044e61af365 (patch)
tree       5314e735bffc0eba02dd6c028e01cdd5fc863b02 /ggml/src/ggml-vulkan.cpp
parent     3e6851621c54e8424196810f2798811f069bcff1 (diff)
Falcon3 changes (#168)
* Add Falcon3 pre-tokenizer (same as llama3)
* q8_K16: use integer arithmetic to sum row values
The existing implementation, which simply sums up the row values in f32,
works fine for the original BitNet models and also for the TriLM
ternary models. But for Falcon3 I see a significant difference between
the CPU and the GPU perplexity. If I instead use the q8_K16 int8_t quants
to sum up the values in a row, the CPU-GPU PPL difference becomes much
smaller, and we get a lower PPL than Microsoft BitNet, which claims
to be "lossless".
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'ggml/src/ggml-vulkan.cpp')
0 files changed, 0 insertions, 0 deletions