ik_llama.cpp.git (branch: main)
Commit log for path: iqk_mul_mat.cpp
Age        | Commit message | Author
2024-07-18 | iqk_mul_mat(f16): make it work for row sizes that are multiple of 4 on NEON | Iwan Kawrakow
2024-07-18 | iqk_mul_mat(float): make it work for row sizes that are multiple of 4 on AVX2 | Iwan Kawrakow
2024-07-17 | iq1bn: faster AVX2 | Iwan Kawrakow
2024-07-17 | iq1bn(no lookup): better version | Iwan Kawrakow
2024-07-16 | iq1bn(no lookup): NEON attempts | Iwan Kawrakow
2024-07-15 | iq1bn(no lookup): NEON | Iwan Kawrakow
2024-07-15 | iq1bn(no lookup): somewhat better | Iwan Kawrakow
2024-07-15 | iq1bn: attempt without a lookup table | Iwan Kawrakow
2024-06-25 | bitnet: remove iq1_bn lookup table storing +/- signs | Iwan Kawrakow
2024-06-25 | bitnet: simdify q8_K64 quantization on AVX | Iwan Kawrakow
2024-06-25 | bitnet: NEON improvements for iq1_bn | Iwan Kawrakow
2024-06-25 | Bitnet: adapt NEON and Metal to the alternative grid | Iwan Kawrakow
2024-06-25 | Bitnet: trying an alternative iq1_bn grid | Iwan Kawrakow
2024-06-25 | Bitnet: slightly faster 1.625 bpw variant for AVX512VL | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add IQ4_NL also on NEON | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add IQ4_NL | Iwan Kawrakow
2024-06-22 | bitnet(scale in a separate tensor): CPU tweaks | Iwan Kawrakow
2024-06-22 | bitnet(scale in a separate tensor): CPU tweaks | Iwan Kawrakow
2024-06-22 | bitnet(scale in a separate tensor): more CPU improvements | Iwan Kawrakow
2024-06-22 | bitnet(scale in a separate tensor): CPU improvements | Iwan Kawrakow
2024-06-22 | bitnet: put the scale in a separate tensor | Iwan Kawrakow
2024-06-22 | Bitnet(1.75 bpw): higher precision fp8 scale | Iwan Kawrakow
2024-06-22 | Bitnet(2.25 bpw): NEON | Iwan Kawrakow
2024-06-22 | Bitnet: 2.25 bpw version | Iwan Kawrakow
2024-06-22 | bitnet 2 bpw: NEON implementation | Iwan Kawrakow
2024-06-22 | Removed extra column | Iwan Kawrakow
2024-06-22 | bitnet 2 bpw: AVX2 implementation | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(bitnet): fix typo | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(bitnet): slightly faster AVX2 | Iwan Kawrakow
2024-06-22 | iq1_bn: better NEON implementation | Iwan Kawrakow
2024-06-22 | iq1_bn(NEON): works now, but very slow | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(iq1_bn): WIP NEON - don't see why it is not working | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(iq1_bn): WIP NEON (not working) | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: improve iq1_bn (bitnet) on vanilla AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: improve iq1_bn (bitnet) on AVX2 | Iwan Kawrakow
2024-06-22 | bitnet: scale is per row, not per tensor | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add iq1_bn (bitnet) | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: cleanup | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be independent of llamafile_sgemm | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be independent of llamafile_sgemm (WIP) | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be able to handle any f16/f32 combination on AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: turn on AVX512 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: slightly better fp16 with 16 vector registers | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: better fp16 for AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: fp16 for Arm | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: slightly faster FANCY_SIMD dot product | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: fix q8_0 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: use block_q8_1_x4 also for AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: use block_q8_0_x4 also for AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: delete unused stuff | Iwan Kawrakow