path: root/examples/server/public_simplechat
author: Kawrakow <iwankawrakow@gmail.com> 2024-09-27 08:16:06 +0300
committer: GitHub <noreply@github.com> 2024-09-27 08:16:06 +0300
commit 6dec4af4b6e65eb72e646a6f8b10d77c9d306281 (patch)
tree b69a6dfdd024ccf6a4d7490666664cbac4bc65ce /examples/server/public_simplechat
parent 546f3ef349a7082fbc349897c3c7246baed2a6c6 (diff)
Adding ability to have meta data per tensor row (#61)
* POC: per row scale. This is a POC of how to work around opinionated ggml to have scales per row rather than per block. Only implemented for Zen4 and only for iq2_tn.
* POC per row scale: iq2_tn on NEON
* POC per row scale: iq2_tn on Metal
* Per row scale Metal templates
* iq1_tn: shrink to 1.625 bpw (NEON and Metal)
* POC per row scale: CUDA
* POC per row scale: add CUDA TODOs. There are two places left in ggml-cuda.cu where it is assumed that type_size * n_per_row / block_size is the way to compute and handle row sizes. This does not affect simple usage, but will lead to issues when tensors are split between GPUs.
* Per row scales - CUDA. The only place left where unnecessary assumptions are made is in the Flash Attention code. As we are not using any quants with per-row scales for the quantized KV cache, it should be OK for now.
* Update the IQ1_TN and IQ2_TN bpw shown to the user

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Diffstat (limited to 'examples/server/public_simplechat')
0 files changed, 0 insertions, 0 deletions