Age        | Commit message                                                        | Author
2024-06-22 | bitnet(scale in a separate tensor): CUDA                              | Iwan Kawrakow
2024-06-22 | bitnet: put the scale in a separate tensor                            | Iwan Kawrakow
2024-06-22 | Bitnet(1.75 bpw): higher precision fp8 scale                          | Iwan Kawrakow
2024-06-22 | Bitnet(1.75 bpw): slightly faster CUDA dot product                    | Iwan Kawrakow
2024-06-22 | Bitnet(2.25 bpw): faster Metal dot product                            | Iwan Kawrakow
2024-06-22 | Bitnet(2.25 bpw): Metal                                               | Iwan Kawrakow
2024-06-22 | Bitnet(2.25 bpw): CUDA                                                | Iwan Kawrakow
2024-06-22 | Bitnet(2.25 bpw): NEON                                                | Iwan Kawrakow
2024-06-22 | Bitnet: 2.25 bpw version                                              | Iwan Kawrakow
2024-06-22 | bitnet 2 bpw: NEON implementation                                     | Iwan Kawrakow
2024-06-22 | Removed extra column                                                  | Iwan Kawrakow
2024-06-22 | bitnet 2 bpw: AVX2 implementation                                     | Iwan Kawrakow
2024-06-22 | bitnet: add 2 bpw quantization                                        | Iwan Kawrakow
2024-06-22 | Move Q8_K64 quantization to iqk-quantize.cpp and add copyright notice | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(bitnet): fix typo                                         | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(bitnet): slightly faster AVX2                             | Iwan Kawrakow
2024-06-22 | iq1_bn: better NEON implementation                                    | Iwan Kawrakow
2024-06-22 | iq1_bn(NEON): works now, but very slow                                | Iwan Kawrakow
2024-06-22 | iq1_bn(Metal): 66.2 -> 67.1 t/s                                       | Iwan Kawrakow
2024-06-22 | iq1_bn(Metal): 64 -> 66.2 t/s for TG                                  | Iwan Kawrakow
2024-06-22 | iq1_bn(Metal): 64 -> 66.2 t/s for TG                                  | Iwan Kawrakow
2024-06-22 | iq1_bn(Metal): 60 -> 64 t/s for TG                                    | Iwan Kawrakow
2024-06-22 | iq1_bn: very slightly better Metal dot product                        | Iwan Kawrakow
2024-06-22 | iq1_bn: Metal now works                                               | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(iq1_bn): WIP NEON - don't see why it is not working       | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(iq1_bn): WIP NEON (not working)                           | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: improve iq1_bn (bitnet) on vanilla AVX2                  | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: improve iq1_bn (bitnet) on AVX2                          | Iwan Kawrakow
2024-06-22 | bitnet: fix scalar dot product                                        | Iwan Kawrakow
2024-06-22 | bitnet: scale is per row, not per tensor                              | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add iq1_bn (bitnet)                                      | Iwan Kawrakow
2024-06-22 | bitnet: CUDA, scalar, AVX2                                            | Iwan Kawrakow
2024-06-22 | bitnet: python + llama                                                | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: cleanup                                                  | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be independent of llamafile_sgemm                        | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be independent of llamafile_sgemm (WIP)                  | Iwan Kawrakow
2024-06-22 | Fix nb4                                                               | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add ability to disable it                                | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be able to handle any f16/f32 combination on AVX2        | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: turn on AVX512                                           | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: slightly better fp16 with 16 vector registers            | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: better fp16 for AVX2                                     | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: fp16 for Arm                                             | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: slightly faster FANCY_SIMD dot product                   | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: fix q8_0                                                 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: decouple from llamafile also in cmake                    | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: make it build with the Makefile                          | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: use block_q8_1_x4 also for AVX2                          | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: use block_q8_0_x4 also for AVX2                          | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: delete unused stuff                                      | Iwan Kawrakow