path: root/gguf-py
author: Zheng.Deng <32841220+dengzheng-cloud@users.noreply.github.com> 2024-04-17 04:51:07 +0800
committer: GitHub <noreply@github.com> 2024-04-16 23:51:07 +0300
commit: facb8b56f8fd3bb10a693bf0943ae9d69d0828ef (patch)
tree: 169ecebed53b9047b7f234e673846fb1a84ab229 /gguf-py
parent: 532c1737a14bb4b99747e6f460874947df37e450 (diff)
convert : fix autoawq gemma (#6704)
* fix autoawq quantized gemma model convert error

  Using autoawq to quantize a gemma model includes an lm_head.weight tensor in model-00001-of-00002.safetensors. As a result, convert-hf-to-gguf.py cannot map lm_head.weight; skipping this tensor when loading prevents the error.

* change code to full string match and print necessary message

  Change the code to a full-string match and print a short message to inform users that lm_head.weight has been skipped.

---------

Co-authored-by: Zheng.Deng <32841220+CUGfred@users.noreply.github.com>
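The fix described above can be sketched as a small filter over the tensors read from a safetensors shard: a tensor whose name is exactly `lm_head.weight` is dropped with a short notice instead of being passed to the (missing) name mapping. This is a minimal illustration of the skip logic; the function name and dict-based tensor representation are assumptions for the sketch, not the actual convert-hf-to-gguf.py code.

```python
def filter_tensors(tensors):
    """Drop the unmappable lm_head.weight tensor, keep everything else.

    `tensors` maps tensor names to their data. This helper is a
    hypothetical sketch of the commit's behavior, not the real converter.
    """
    kept = {}
    for name, data in tensors.items():
        # Full-string match, as the commit message specifies, so that
        # legitimately mappable names containing "lm_head" are untouched.
        if name == "lm_head.weight":
            print("Skipping tensor 'lm_head.weight' (not mapped for this model)")
            continue
        kept[name] = data
    return kept
```

A full-string comparison (rather than a substring check) is what the second bullet of the commit message calls for, so only the exact extra tensor emitted by autoawq is skipped.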
Diffstat (limited to 'gguf-py')
0 files changed, 0 insertions, 0 deletions