Commit log for path: root/iqk_mul_mat.cpp
Age        | Commit message | Author
2024-07-18 | iqk_mul_mat(f16): make it work for row sizes that are multiple of 4 on NEON | Iwan Kawrakow
2024-07-18 | iqk_mul_mat(float): make it work for row sizes that are multiple of 4 on AVX2 | Iwan Kawrakow
2024-07-17 | iq1bn: faster AVX2 | Iwan Kawrakow
2024-07-17 | iq1bn(no lookup): better version | Iwan Kawrakow
2024-07-16 | iq1bn(no lookup): NEON attempts | Iwan Kawrakow
2024-07-15 | iq1bn(no lookup): NEON | Iwan Kawrakow
2024-07-15 | iq1bn(no lookup): somewhat better | Iwan Kawrakow
2024-07-15 | iq1bn: attempt without a lookup table | Iwan Kawrakow
2024-06-25 | bitnet: remove iq1_bn lookup table storing +/- signs | Iwan Kawrakow
2024-06-25 | bitnet: simdify q8_K64 quantization on AVX | Iwan Kawrakow
2024-06-25 | bitnet: NEON improvements for iq1_bn | Iwan Kawrakow
2024-06-25 | Bitnet: adapt NEON and Metal to the alternative grid | Iwan Kawrakow
2024-06-25 | Bitnet: trying an alternative iq1_bn grid | Iwan Kawrakow
2024-06-25 | Bitnet: slightly faster 1.625 bpw variant for AVX512VL | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add IQ4_NL also on NEON | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add IQ4_NL | Iwan Kawrakow
2024-06-22 | bitnet(scale in a separate tensor): CPU tweaks | Iwan Kawrakow
2024-06-22 | bitnet(scale in a separate tensor): CPU tweaks | Iwan Kawrakow
2024-06-22 | bitnet(scale in a separate tensor): more CPU improvements | Iwan Kawrakow
2024-06-22 | bitnet(scale in a separate tensor): CPU improvements | Iwan Kawrakow
2024-06-22 | bitnet: put the scale in a separate tensor | Iwan Kawrakow
2024-06-22 | Bitnet(1.75 bpw): higher precision fp8 scale | Iwan Kawrakow
2024-06-22 | Bitnet(2.25 bpw): NEON | Iwan Kawrakow
2024-06-22 | Bitnet: 2.25 bpw version | Iwan Kawrakow
2024-06-22 | bitnet 2 bpw: NEON implementation | Iwan Kawrakow
2024-06-22 | Removed extra column | Iwan Kawrakow
2024-06-22 | bitnet 2 bpw: AVX2 implementation | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(bitnet): fix typo | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(bitnet): slightly faster AVX2 | Iwan Kawrakow
2024-06-22 | iq1_bn: better NEON implementation | Iwan Kawrakow
2024-06-22 | iq1_bn(NEON): works now, but very slow | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(iq1_bn): WIP NEON - don't see why it is not working | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(iq1_bn): WIP NEON (not working) | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: improve iq1_bn (bitnet) on vanilla AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: improve iq1_bn (bitnet) on AVX2 | Iwan Kawrakow
2024-06-22 | bitnet: scale is per row, not per tensor | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add iq1_bn (bitnet) | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: cleanup | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be independent of llamafile_sgemm | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be independent of llamafile_sgemm (WIP) | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be able to handle any f16/f32 combination on AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: turn on AVX512 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: slightly better fp16 with 16 vector registers | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: better fp16 for AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: fp16 for Arm | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: slightly faster FANCY_SIMD dot product | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: fix q8_0 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: use block_q8_1_x4 also for AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: use block_q8_0_x4 also for AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: delete unused stuff | Iwan Kawrakow