Age | Commit message | Author
2024-06-22 | bitnet(scale in a separate tensor): CUDA | Iwan Kawrakow
2024-06-22 | bitnet: put the scale in a separate tensor | Iwan Kawrakow
and correspondingly add an extra ggml_mul_mat operation. As per @ggerganov, this is how things should be done. It seems to be working, but as far as I can tell this results in a ~15% performance penalty for prompt processing. Committing so I can go and test on other platforms.
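A minimal sketch of the idea in plain C (not the actual ggml graph code; function names here are illustrative): the ternary mat-mul produces unscaled outputs, and the per-row scales, now living in their own tensor, are applied in a second operation.

```c
// Conceptual sketch only: split the Bitnet scale out of the quantized tensor.
// Step 1 does the ternary mat-vec without any scale; step 2 applies the
// per-row scale that now lives in a separate tensor.
#include <stdint.h>

// y[m] = sum_k w[m][k] * x[k], with w[m][k] in {-1, 0, +1}
static void ternary_matvec(const int8_t *w, const float *x, float *y, int rows, int cols) {
    for (int m = 0; m < rows; ++m) {
        float sum = 0;
        for (int k = 0; k < cols; ++k) sum += w[m*cols + k] * x[k];
        y[m] = sum;                               // note: no scale applied here
    }
}

// extra op: multiply each output row by its scale from the separate tensor
static void apply_row_scales(float *y, const float *row_scale, int rows) {
    for (int m = 0; m < rows; ++m) y[m] *= row_scale[m];
}
```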
2024-06-22 | Bitnet(1.75 bpw): higher precision fp8 scale | Iwan Kawrakow
Use 3 bits for the exponent and 5 bits for the mantissa. This makes PPL the same as fp16 (but the previous version with 4 bits each for the exponent and mantissa was already good enough for all practical purposes).
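For illustration, a hedged sketch of what an unsigned 8-bit float with a 3-bit exponent and 5-bit mantissa can look like. The bias, rounding, and zero handling below are assumptions, not necessarily what the actual code does.

```c
// Sketch of an unsigned fp8 "e3m5" scale: 3 exponent bits, 5 mantissa bits,
// no sign bit (the scale is positive). E3M5_BIAS and the saturation rules
// are assumed values for this example.
#include <math.h>
#include <stdint.h>

#define E3M5_BIAS 4   // assumed: representable exponents -4 .. +3

static uint8_t fp32_to_e3m5(float x) {
    if (x <= 0) return 0;                      // map non-positive input to the smallest code
    int e; float m = frexpf(x, &e);            // x = m * 2^e, m in [0.5, 1)
    m *= 2.0f; e -= 1;                         // normalize to m in [1, 2)
    int mant = (int)roundf((m - 1.0f) * 32.0f);
    if (mant == 32) { mant = 0; ++e; }         // rounding carried into the exponent
    if (e < -E3M5_BIAS) return 0;              // underflow
    if (e >  7 - E3M5_BIAS) return 0xFF;       // overflow: saturate to the largest code
    return (uint8_t)(((e + E3M5_BIAS) << 5) | mant);
}

static float e3m5_to_fp32(uint8_t q) {
    return ldexpf(1.0f + (q & 31) / 32.0f, (q >> 5) - E3M5_BIAS);
}
```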
2024-06-22 | Bitnet(1.75 bpw): slightly faster CUDA dot product | Iwan Kawrakow
We get 205 t/s, so ~13% slower than 2 bit.
2024-06-22 | Bitnet(2.25 bpw): faster Metal dot product | Iwan Kawrakow
With this we get TG-128 = 97 t/s.
2024-06-22 | Bitnet(2.25 bpw): Metal | Iwan Kawrakow
We get PP-512 = 702 t/s, TG-128 = 84 t/s. This is almost on par with q4_0, which is rare on Metal (not to say nonexistent). For reference, q4_0 gives 726 t/s / 86 t/s for Bitnet. TG is kind of funny because we hit 72 t/s on the CPU.
2024-06-22 | Bitnet(2.25 bpw): CUDA | Iwan Kawrakow
We get PP-512 = 9600 t/s, TG-128 = 234 t/s (but we need to use 8 CPU threads, else results are lower, so clearly there is something being computed on the CPU). PP-512 is very close to PP-512(fp16) = 9800 t/s
2024-06-22 | Bitnet(2.25 bpw): NEON | Iwan Kawrakow
We get PP-512 = 192 t/s, TG-128 = 72 t/s
2024-06-22 | Bitnet: 2.25 bpw version | Iwan Kawrakow
Just scalar and AVX2 for now. PP-512 is even faster (325 t/s on the Ryzen-7950X, 404 t/s on the Ryzen-5975WX). We lose ~6-7% for TG due to being memory bound and the model being 10% larger.
2024-06-22 | bitnet 2 bpw: NEON implementation | Iwan Kawrakow
We get PP-512 = 190 t/s and TG-128 = 75 t/s. 2 bpw TG on the CPU beats 1.75 bpw on the GPU!
2024-06-22 | Removed extra column | Iwan Kawrakow
2024-06-22 | bitnet 2 bpw: AVX2 implementation | Iwan Kawrakow
We get PP-512 = 322 t/s. TG is already 51.6 t/s at 4 threads, then it saturates and starts going down for more than 8 threads.
2024-06-22 | bitnet: add 2 bpw quantization | Iwan Kawrakow
The scalar dot product already achieves 37 t/s for TG!
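As a rough illustration of what a 2 bpw ternary format involves (the actual block layout and scales of the new quant type are not shown here), ternary weights can be stored as unsigned 2-bit codes 0/1/2 meaning -1/0/+1, four per byte:

```c
// Illustrative sketch only: pack ternary weights {-1, 0, +1} as 2-bit codes
// {0, 1, 2}, four per byte, and compute a scalar dot product against int8
// activations. The real quant type adds block structure and scales on top.
#include <stdint.h>

static void pack_ternary(const int8_t *w, uint8_t *packed, int n) {   // n % 4 == 0
    for (int i = 0; i < n; i += 4) {
        packed[i/4] = (uint8_t)((w[i+0]+1) | ((w[i+1]+1) << 2) | ((w[i+2]+1) << 4) | ((w[i+3]+1) << 6));
    }
}

static int dot_ternary_q8(const uint8_t *packed, const int8_t *q8, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        int code = (packed[i/4] >> (2*(i%4))) & 3;   // 0, 1 or 2
        sum += (code - 1) * q8[i];                   // back to -1/0/+1
    }
    return sum;
}
```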
2024-06-22 | Move Q8_K64 quantization to iqk-quantize.cpp and add copyright notice | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(bitnet): fix typo | Iwan Kawrakow
With the last change (which added the typo), I'm now getting PP-512 = 300 t/s on the Ryzen-5975WX.
2024-06-22 | iqk_mul_mat(bitnet): slightly faster AVX2 | Iwan Kawrakow
We now get 214 t/s on the Ryzen-7950X
2024-06-22 | iq1_bn: better NEON implementation | Iwan Kawrakow
PP is decent with 131 t/s (q4_0 has 150 t/s). TG is better than last commit but still bad at 33.1 t/s (in comparison q4_0 gets 52.3 t/s). I had to go to the (0, 1, 2) table. Apple Silicon clearly does not like operations with signs.
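Presumably the point of the (0, 1, 2) table is the standard sign-free trick for ternary dot products: with codes q[i] in {0, 1, 2} encoding w[i] = q[i] - 1, we have dot(w, y) = dot(q, y) - sum(y), so the inner loop needs only unsigned lookups and multiplies, and sum(y) can be computed once per activation row. A scalar sketch of the identity (not the NEON kernel itself):

```c
// dot(w, y) with w = q - 1, computed entirely from non-negative codes:
//     sum_i (q[i] - 1) * y[i] = dot(q, y) - sum(y)
// sum_y is assumed to be precomputed (e.g. when the activations are quantized).
#include <stdint.h>

static int32_t dot_via_unsigned_codes(const uint8_t *q, const int8_t *y, int32_t sum_y, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; ++i) acc += q[i] * y[i];   // codes are 0/1/2, no signed weights needed
    return acc - sum_y;                               // subtract sum(y) once at the end
}
```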
2024-06-22 | iq1_bn(NEON): works now, but very slow | Iwan Kawrakow
Basically 2X slower than q4_0.
2024-06-22 | iq1_bn(Metal): 66.2 -> 67.1 t/s | Iwan Kawrakow
2024-06-22 | iq1_bn(Metal): 64 -> 66.2 t/s for TG | Iwan Kawrakow
This should be good enough. One cannot ask Apple Silicon to do too much work.
2024-06-22 | iq1_bn(Metal): 60 -> 64 t/s for TG | Iwan Kawrakow
2024-06-22 | iq1_bn: very slightly better Metal dot product | Iwan Kawrakow
2024-06-22 | iq1_bn: Metal now works | Iwan Kawrakow
PP performance is decent (668 t/s vs 724 t/s for q4_0), but TG is kind of low (60 t/s vs 81 t/s for q4_0).
2024-06-22 | iqk_mul_mat(iq1_bn): WIP NEON - don't see why it is not working | Iwan Kawrakow
2024-06-22 | iqk_mul_mat(iq1_bn): WIP NEON (not working) | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: improve iq1_bn (bitnet) on vanilla AVX2 | Iwan Kawrakow
I now get PP-512 = 270 t/s on the Ryzen-5975WX
2024-06-22 | iqk_mul_mat: improve iq1_bn (bitnet) on AVX2 | Iwan Kawrakow
We now get 207 t/s for PP-512 and 51 t/s for TG-128 using 16 threads.
2024-06-22 | bitnet: fix scalar dot product | Iwan Kawrakow
I had forgotten to adjust for the change to q8_K64. On the M2 I'm getting 10.8 t/s with the scalar version!
2024-06-22 | bitnet: scale is per row, not per tensor | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add iq1_bn (bitnet) | Iwan Kawrakow
We get 174 t/s for PP-512 and 49 t/s for TG-128 using 16 threads.
2024-06-22 | bitnet: CUDA, scalar, AVX2 | Iwan Kawrakow
2024-06-22 | bitnet: python + llama | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: cleanup | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be independent of llamafile_sgemm | Iwan Kawrakow
Verified that it works on AVX2. Also turned on any combination of f16 and f32 (i.e., added f16 x f16 and f32 x f32).
2024-06-22 | iqk_mul_mat: be independent of llamafile_sgemm (WIP) | Iwan Kawrakow
* Remove iqk_mul_mat from llamafile_sgemm
* Pass tensor types and strides to iqk_mul_mat
It is marked WIP because it has only been tested on __aarch64__ (a rough interface sketch follows below).
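A hypothetical shape for such an entry point, just to show what "pass tensor types and strides" can mean in practice. The name, parameter order, and types below are illustrative, not the actual declaration in the repository.

```c
// Hypothetical sketch only: an iqk_mul_mat-style interface that takes the
// tensor types and row strides directly instead of going through the
// llamafile_sgemm wrappers. Not the repo's real signature.
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Returns true if the (type_A, type_B) combination is handled, false so the
// caller can fall back to the generic ggml path.
bool iqk_mul_mat_sketch(
        int64_t Nx, int64_t Ny, int64_t K,                 // C is Nx x Ny, shared dimension K
        int     type_A, const void *A, size_t stride_A,    // quantized/f16/f32 weights, row stride in bytes
        int     type_B, const void *B, size_t stride_B,    // activations
        float  *C, size_t stride_C,                        // output rows
        int     ith, int nth);                             // thread index / thread count for work splitting
```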
2024-06-22 | Fix nb4 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: add ability to disable it | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: be able to handle any f16/f32 combination on AVX2 | Iwan Kawrakow
But only turning on f16 x f32 and f32 x f16 for now.
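One straightforward way to cover a mixed f16/f32 combination on AVX2 is to widen the f16 operand on load (F16C) and accumulate in f32, so all combinations share one f32 kernel. A hedged sketch of the idea, not the actual code:

```c
// Sketch: f16 x f32 dot product on AVX2/F16C by converting the f16 side on
// load. The hsum at the end reduces the 8 accumulator lanes to a scalar.
#include <immintrin.h>
#include <stdint.h>

static float dot_f16_f32(const uint16_t *a_f16, const float *b_f32, int n) {   // n % 8 == 0
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i < n; i += 8) {
        __m256 a = _mm256_cvtph_ps(_mm_loadu_si128((const __m128i *)(a_f16 + i)));   // f16 -> f32
        acc = _mm256_fmadd_ps(a, _mm256_loadu_ps(b_f32 + i), acc);
    }
    __m128 s = _mm_add_ps(_mm256_castps256_ps128(acc), _mm256_extractf128_ps(acc, 1));
    s = _mm_add_ps(s, _mm_movehl_ps(s, s));
    s = _mm_add_ss(s, _mm_shuffle_ps(s, s, 1));
    return _mm_cvtss_f32(s);
}
```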
2024-06-22 | iqk_mul_mat: turn on AVX512 | Iwan Kawrakow
It makes no difference on my Ryzen-7950X, but perhaps it will be beneficial for CPUs with real AVX512.
2024-06-22 | iqk_mul_mat: slightly better fp16 with 16 vector registers | Iwan Kawrakow
2x6 (Nx x Ny) tiles instead of 3x4. We get 142.7 t/s on the Ryzen-5975WX, up from 138 t/s. We use Nx registers to preload the fp16 weights, so the total number of registers required is Nx * (Ny + 1): 15 in the case of 3 x 4 tiles and 14 for 2 x 6 tiles. I guess the one spare register helps. But maybe it is just a matter of how things get loaded into the cache. On the 7950X I did try 3 x 8 and it did not perform as well as 5 x 5.
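For reference, a stripped-down sketch of how a 2 x 6 tile fits the 16-register budget: 2 registers hold the converted fp16 weights of the two A rows, 2 * 6 = 12 hold the accumulators (Nx * (Ny + 1) = 14 in total), leaving spares for the B loads. Strides and layout are simplified; this is not the repo's kernel.

```c
// Hedged sketch of a 2 x 6 (Nx x Ny) fp16 microkernel on AVX2 with 16 vector
// registers. Each accumulator holds partial dot products along K and is
// horizontally summed into C at the end.
#include <immintrin.h>
#include <stdint.h>

static float hsum8(__m256 v) {
    __m128 s = _mm_add_ps(_mm256_castps256_ps128(v), _mm256_extractf128_ps(v, 1));
    s = _mm_add_ps(s, _mm_movehl_ps(s, s));
    return _mm_cvtss_f32(_mm_add_ss(s, _mm_shuffle_ps(s, s, 1)));
}

static void tile_2x6_f16(const uint16_t *A, int lda,   // 2 rows of fp16 weights
                         const float *B, int ldb,      // 6 columns of f32 activations
                         float *C, int ldc, int K) {   // K % 8 == 0
    __m256 acc[2][6];
    for (int i = 0; i < 2; ++i) for (int j = 0; j < 6; ++j) acc[i][j] = _mm256_setzero_ps();
    for (int k = 0; k < K; k += 8) {
        // preload the fp16 weights of both rows (2 registers)
        __m256 a0 = _mm256_cvtph_ps(_mm_loadu_si128((const __m128i *)(A + 0*lda + k)));
        __m256 a1 = _mm256_cvtph_ps(_mm_loadu_si128((const __m128i *)(A + 1*lda + k)));
        for (int j = 0; j < 6; ++j) {
            __m256 b = _mm256_loadu_ps(B + j*ldb + k);
            acc[0][j] = _mm256_fmadd_ps(a0, b, acc[0][j]);
            acc[1][j] = _mm256_fmadd_ps(a1, b, acc[1][j]);
        }
    }
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 6; ++j) C[i*ldc + j] = hsum8(acc[i][j]);
}
```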
2024-06-22 | iqk_mul_mat: better fp16 for AVX2 | Iwan Kawrakow
Basically use what I did for Arm. Improves PP performance to 141.7 t/s up from 136 t/s on the Ryzen-7950X (32 vector registers, so we use 5x5 tiling). This is now 10% faster than tinyBLAS. There is a minor improvement also on the Ryzen-5975WX (16 vector registers, so we use 4x3 tiling): we get 138 t/s up from 136 t/s. tinyBLAS is at 132 t/s.
2024-06-22 | iqk_mul_mat: fp16 for Arm | Iwan Kawrakow
~2% slower than tinyBLAS - not sure why.
2024-06-22 | iqk_mul_mat: slightly faster FANCY_SIMD dot product | Iwan Kawrakow
About 2% faster for q4_K.
2024-06-22 | iqk_mul_mat: fix q8_0 | Iwan Kawrakow
I was happily using _mm256_packs_epi32() to pack the q8_0 x q8_0 dot products back to int16_t, and getting useful results. But theoretically this can overflow, so it is better to use _mm256_unpacklo_ and _mm256_unpackhi_ to combine the 4 dot products using int32_t additions. This is (almost) as fast, unlike _mm256_hadd_epi32(), which seems excessively slow on the Ryzen-7950X.
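A hedged sketch of the safer combine: four int32 dot-product vectors are merged with unpack/add steps, staying in int32 the whole time, so there is no int16 saturation. The exact shuffle sequence in the repo may differ.

```c
// Combine four per-block int32 sums without going through int16.
// The returned __m128i holds the four block totals {sum(s0), sum(s1), sum(s2), sum(s3)}.
#include <immintrin.h>

static __m128i combine_4_sums_i32(__m256i s0, __m256i s1, __m256i s2, __m256i s3) {
    __m256i p01 = _mm256_add_epi32(_mm256_unpacklo_epi32(s0, s1), _mm256_unpackhi_epi32(s0, s1));
    __m256i p23 = _mm256_add_epi32(_mm256_unpacklo_epi32(s2, s3), _mm256_unpackhi_epi32(s2, s3));
    __m256i p   = _mm256_add_epi32(_mm256_unpacklo_epi64(p01, p23), _mm256_unpackhi_epi64(p01, p23));
    // each 128-bit half of p now holds partial {s0, s1, s2, s3} sums; add the halves
    return _mm_add_epi32(_mm256_castsi256_si128(p), _mm256_extracti128_si256(p, 1));
}
```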
2024-06-22 | iqk_mul_mat: decouple from llamafile also in cmake | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: make it build with the Makefile | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: use block_q8_1_x4 also for AVX2 | Iwan Kawrakow
Here the performance gain is more significant. E.g., for q4_1, PP-512 becomes 168 t/s up from 137 t/s. Now the performance gap to q4_0 is so significant that I wonder if I should change to using Q8_1 also for the qX_0 legacy quants.
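For orientation, the kind of layout a "x4" activation block implies: four consecutive Q8_1 blocks are stored as one contiguous chunk, so the scales/sums and the 4 x 32 int8 values can be fetched with fewer, wider loads. This is a plausible sketch only; the actual struct in the repository may differ.

```c
// Plausible sketch (not the repo's actual definition) of an interleaved
// "times 4" Q8_1 activation block.
#include <stdint.h>

typedef uint16_t half_t;   // placeholder for ggml_half / _Float16

typedef struct {
    half_t d[4];           // per-sub-block scales of the 4 blocks
    half_t s[4];           // per-sub-block d * sum(q) terms (used by the *_1 quants)
    int8_t qs[4 * 32];     // 4 x 32 int8 quants, stored back to back
} block_q8_1_x4_sketch;
```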
2024-06-22 | iqk_mul_mat: use block_q8_0_x4 also for AVX2 | Iwan Kawrakow
2024-06-22 | iqk_mul_mat: delete unused stuff | Iwan Kawrakow