ik_llama.cpp.git: commit log (branch main)
2024-06-22  bitnet(scale in a separate tensor): CUDA  (Iwan Kawrakow)
2024-06-22  bitnet: put the scale in a separate tensor  (Iwan Kawrakow)
2024-06-22  Bitnet(1.75 bpw): higher precision fp8 scale  (Iwan Kawrakow)
2024-06-22  Bitnet(1.75 bpw): slightly faster CUDA dot product  (Iwan Kawrakow)
2024-06-22  Bitnet(2.25 bpw): faster Metal dot product  (Iwan Kawrakow)
2024-06-22  Bitnet(2.25 bpw): Metal  (Iwan Kawrakow)
2024-06-22  Bitnet(2.25 bpw): CUDA  (Iwan Kawrakow)
2024-06-22  Bitnet(2.25 bpw): NEON  (Iwan Kawrakow)
2024-06-22  Bitnet: 2.25 bpw version  (Iwan Kawrakow)
2024-06-22  bitnet 2 bpw: NEON implementation  (Iwan Kawrakow)
2024-06-22  Removed extra column  (Iwan Kawrakow)
2024-06-22  bitnet 2 bpw: AVX2 implementation  (Iwan Kawrakow)
2024-06-22  bitnet: add 2 bpw quantization  (Iwan Kawrakow)
2024-06-22  Move Q8_K64 quantization to iqk-quantize.cpp and add copyright notice  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat(bitnet): fix typo  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat(bitnet): slightly faster AVX2  (Iwan Kawrakow)
2024-06-22  iq1_bn: better NEON implementation  (Iwan Kawrakow)
2024-06-22  iq1_bn(NEON): works now, but very slow  (Iwan Kawrakow)
2024-06-22  iq1_bn(Metal): 66.2 -> 67.1 t/s  (Iwan Kawrakow)
2024-06-22  iq1_bn(Metal): 64 -> 66.2 t/s for TG  (Iwan Kawrakow)
2024-06-22  iq1_bn(Metal): 64 -> 66.2 t/s for TG  (Iwan Kawrakow)
2024-06-22  iq1_bn(Metal): 60 -> 64 t/s for TG  (Iwan Kawrakow)
2024-06-22  iq1_bn: very slightly better Metal dot product  (Iwan Kawrakow)
2024-06-22  iq1_bn: Metal now works  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat(iq1_bn): WIP NEON - don't see why it is not working  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat(iq1_bn): WIP NEON (not working)  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: improve iq1_bn (bitnet) on vanilla AVX2  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: improve iq1_bn (bitnet) on AVX2  (Iwan Kawrakow)
2024-06-22  bitnet: fix scalar dot product  (Iwan Kawrakow)
2024-06-22  bitnet: scale is per row, not per tensor  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: add iq1_bn (bitnet)  (Iwan Kawrakow)
2024-06-22  bitnet: CUDA, scalar, AVX2  (Iwan Kawrakow)
2024-06-22  bitnet: python + llama  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: cleanup  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: be independent of llamafile_sgemm  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: be independent of llamafile_sgemm (WIP)  (Iwan Kawrakow)
2024-06-22  Fix nb4  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: add ability to disable it  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: be able to handle any f16/f32 combination on AVX2  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: turn on AVX512  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: slightly better fp16 with 16 vector registers  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: better fp16 for AVX2  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: fp16 for Arm  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: slightly faster FANCY_SIMD dot product  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: fix q8_0  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: decouple from llamafile also in cmake  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: make it build with the Makefile  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: use block_q8_1_x4 also for AVX2  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: use block_q8_0_x4 also for AVX2  (Iwan Kawrakow)
2024-06-22  iqk_mul_mat: delete unused stuff  (Iwan Kawrakow)
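Several of the commits above ("bitnet: add 2 bpw quantization", "bitnet: scale is per row, not per tensor", "bitnet: put the scale in a separate tensor") revolve around BitNet-style ternary weights with a per-row scale stored outside the packed data. The sketch below is only a rough illustration of that general idea, assuming the common BitNet b1.58 absmean recipe; the function name quantize_row_ternary, the int8 storage, and the layout are hypothetical and are not the repository's actual quantization code.

// Hypothetical illustration: per-row ternary quantization, scale kept separately.
#include <cstdint>
#include <cstdio>
#include <cmath>
#include <vector>

// Quantize one row of weights: scale = mean(|w|), each weight mapped to {-1, 0, +1}.
// The returned scale would live in a separate tensor, one value per row.
static float quantize_row_ternary(const float* w, int8_t* q, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += std::fabs(w[i]);
    const float scale = n > 0 ? float(sum / n) : 0.0f;   // per-row scale
    const float inv   = scale > 0 ? 1.0f / scale : 0.0f;
    for (int i = 0; i < n; ++i) {
        const float v = w[i] * inv;
        q[i] = v > 0.5f ? 1 : (v < -0.5f ? -1 : 0);      // ternary weight
    }
    return scale;
}

int main() {
    const int n = 8;
    float row[n] = {0.9f, -1.1f, 0.05f, 1.0f, -0.95f, 0.0f, 1.2f, -0.02f};
    std::vector<int8_t> q(n);
    const float scale = quantize_row_ternary(row, q.data(), n);
    for (int i = 0; i < n; ++i)
        std::printf("w=% .2f  q=%+d  dequant=% .2f\n", row[i], q[i], q[i] * scale);
    return 0;
}

In this toy form the ternary values could be packed at 2 bits per weight, and dequantization is just q[i] * scale, which is why keeping the scale per row (and in its own tensor) rather than per tensor is the recurring theme in the log.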