2024-08-05  Update README.md  (Kawrakow)
There have been a few minor improvements here and there, so I updated the AVX2 Bitnet performance values to the current main branch.
2024-08-05  iq3_k, iq5_k: faster quantization  (Iwan Kawrakow)
Just use the same trick as iq4_k
2024-08-03  iq4_k: speed up quantization by a factor of ~2  (Iwan Kawrakow)
2024-08-01  Add copyright notice  (Iwan Kawrakow)
2024-08-01  iq2/3_k: tiny bit faster Metal dot products  (Iwan Kawrakow)
2024-08-01  iq3_k: slightly faster Metal dequantize kernel  (Iwan Kawrakow)
PP-512 goes to 473 t/s up from 452 t/s.
2024-08-01  iq3_k: Metal dot product  (Iwan Kawrakow)
Quite slow: 43 t/s for a 7B model
2024-08-01  iq2_k: Metal dot product finally works  (Iwan Kawrakow)
It is slow: 45.4 t/s for a 7B model vs 50 t/s for iq2_xs, or 63.3 t/s for q2_K_S.
2024-08-01  iq3_k: Metal dequantize  (Iwan Kawrakow)
2024-08-01  iq3_k: NEON  (Iwan Kawrakow)
2024-08-01  iq3_k: AVX2 iqk_mul_mat  (Iwan Kawrakow)
We get PP-512 = 196 t/s for LLaMA-3.1-8B on the Ryzen-5975WX.
2024-08-01  iq3_k: AVX512 iqk_mul_mat  (Iwan Kawrakow)
We get PP-512 = 180 t/s, TG-128 (4 threads) = 16.35 t/s on the Ryzen-7950X for LLaMA-3.1-8B. In comparison, iq3_s has PP-512 = 96 t/s, TG-128 = 7.6 t/s with iqk_mul_mat, and PP-512 = 28 t/s, TG-128 = 6.8 t/s in mainline llama.cpp.
2024-08-01  iq3_k: faster CUDA dot product  (Iwan Kawrakow)
138 t/s for LLaMA-3.1-8B, which is almost on par with iq3_s.
2024-08-01  iq3_k: CUDA dot product  (Iwan Kawrakow)
Slightly slower than iq3_s - 132 t/s vs 138 t/s for LLaMA-3.1-8B.
2024-08-01  iq3_k: Basics  (Iwan Kawrakow)
Quantize/dequantize, CUDA dequantize. PPL of LLaMA-3.1-8B is better than iq3_s and iq3_m.
2024-08-01  iq2_k: very slightly better CUDA dot product  (Iwan Kawrakow)
169.2 t/s vs 167.8 t/s before.
2024-08-01  iq2_k: better CUDA dot product  (Iwan Kawrakow)
Almost on par with iq2_xs (168 t/s vs 172 t/s).
2024-08-01  iq2_k: CUDA dot product finally works  (Iwan Kawrakow)
Performance is pathetic: 140 t/s for LLaMA-3.1-8B vs 172 t/s for iq2_xs.
2024-08-01  iq5_k: CUDA dot product finally works  (Iwan Kawrakow)
2024-08-01  Factor out iqk CUDA dot products  (Iwan Kawrakow)
I cannot possibly wait for a 5-minute nvcc compilation each time I touch vecdotq.cuh. Also, cmake was adding --options-file X.rsp to the nvcc compile commands, which confuses clangd, so I have turned that off.
2024-08-01  iq5_k: CUDA dot product still not working  (Iwan Kawrakow)
2024-08-01  iq5_k: Metal  (Iwan Kawrakow)
Performance is roughly on par with q5_0.
2024-08-01  iq5_k: NEON  (Iwan Kawrakow)
2024-08-01  iq5_k: AVX512  (Iwan Kawrakow)
2024-08-01  iq5_k: AVX2  (Iwan Kawrakow)
2024-08-01  iq5_k: Basics  (Iwan Kawrakow)
Quantize/dequantize, CUDA dequantize
2024-08-01  iq2_k: Metal. Dot product is wrong  (Iwan Kawrakow)
2024-08-01  iq2_k: NEON  (Iwan Kawrakow)
2024-08-01  iq2_k: slightly faster AVX512  (Iwan Kawrakow)
2024-08-01  iq2_k: simplify AVX512  (Iwan Kawrakow)
2024-08-01  iq2_k: AVX2  (Iwan Kawrakow)
2024-08-01  iq2_k: Basics  (Iwan Kawrakow)
Quantize/dequantize, CUDA dequantize, AVX512 iqk_mul_mat.
2024-07-28  IQ4_K: SOTA 4-bit quantization (#6)  (Kawrakow)
* iq4_k: basics
* quantize/dequantize works
* CUDA dequantize works and one can run PPL calcs. I get PPL = 6.5258 for LLaMA-3.1-8B, which is 1.77% above fp16. In comparison, q4_K_S (same size) is 2.88% above fp16.
* TG on CUDA does not work. Johannes has changed the way i-quant dot products are done, so I need to sort out what he had in mind.
* iqk_mul_mat is not implemented.
* iq4_k: TG now works on CUDA
* iq4_k: AVX512 implementation. For LLaMA-3.1-8B we get PP-512 = 182.6 t/s, TG-128 = 13.6 t/s, so almost the same as q4_K_S.
* iq4_k: AVX2 implementation. For LLaMA-3.1-8B we get PP-512 = 203.1 t/s, TG-128 = 12.9 t/s on the Ryzen-5975X.
* iq4_k: NEON implementation. For LLaMA-3.1-8B we get PP-512 = 60.7 t/s, TG-128 = 25.0 t/s on the M2-Max. TG is on par with q4_K_S, PP is ~10% slower.
* iq4_k: Metal implementation. For LLaMA-3.1-8B we get PP-512 = 445 t/s, TG-128 = 46.3 t/s on a 30-core M2-Max GPU. This is to be compared with (currently) PP-512 = 460 t/s, TG-128 = 51 t/s for q4_K_S.
* iq4_k: scalar dot product

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27  Simdify and multi-thread tanh (#4)  (Kawrakow)
It seemed Gemma-2 performance is lower than expected for its size. Looking at the architecture, I noticed that tanh is used in each layer, and then at the end for softcapping the final output. ggml had tanh set to be computed with a single thread. Combined with tanh(x) being a pretty expensive operation, this resulted in a significant fraction of the time being spent in the tanh operation.

After multi-threading ggml_vec_soft_max_f32 and simd-ifying the tanh computation, I observe a 33% gain in prompt processing speed (!!!). TG is of course memory bound, but despite this, we still get a ~2% boost at 4 threads (which gives max TG performance on my Ryzen-7950X).

Simd-ifying: we have tanh(x) = (exp(2*x) - 1)/(exp(2*x) + 1), so we can just use Justine Tunney's SIMD exp implementation.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27  Merge mainline llama.cpp (#3)  (Kawrakow)
* Merging mainline - WIP
* Merging mainline - WIP. AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.
* Merging mainline - fix Metal
* Remove check

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-26  Offload Bitnet token embeddings to the GPU - the right way (#2)  (Kawrakow)
OK, I should have checked how it was done for Gemma and done the same for Bitnet. But better late than never.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-26  Offload Bitnet token embeddings to the GPU (#1)  (Kawrakow)
* bitnet: put token embeddings on the GPU
* Update README with the new CUDA/Metal performance

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-25  iqk_mul_mat(NEON): adding forgotten fp16 matrix x vector implementation  (Iwan Kawrakow)
2024-07-24  Update README.md  (Kawrakow)
2024-07-24  Update README.md  (Kawrakow)
Trying to avoid line breaks in table
2024-07-24  Update README.md  (Kawrakow)
2024-07-24  Add copyright notices  (Iwan Kawrakow)
Only on the files where I have contributed in a significant way, or the files I wrote myself.
2024-07-24  Remove unused file  (Iwan Kawrakow)
2024-07-24  Remove security  (Iwan Kawrakow)
2024-07-24  Correct spelling in README  (Iwan Kawrakow)
2024-07-24  Update README.md  (Kawrakow)
Adding some more details
2024-07-24  Update README.md  (Kawrakow)
Adding MoE and Bitnet performance tables
2024-07-24  Update README.md  (Kawrakow)
I hate it when tables look fine in the Preview but then end up with columns split into 2 lines when committed. That's what is happening here, so removed test column from the performance tables.
2024-07-24  Update README.md  (Kawrakow)
Added performance comparison tables
2024-07-24  iqk_mul_mat(NEON): special case for n not divisible by 8  (Iwan Kawrakow)
Else fp16 PP performance drops by nearly a factor of 2 compared to what we had before.
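The pattern behind this kind of fix can be sketched as follows. This is a hypothetical scalar illustration, not the actual iqk_mul_mat NEON code: keep the fast 8-wide body for the bulk of the row and handle the n % 8 remainder separately, instead of falling back to a slow generic path for the whole row.

```c
// dot_f32 is a made-up name; the inner 8-wide loop stands in for an
// 8-lane SIMD fused multiply-add.
float dot_f32(const float *x, const float *y, int n) {
    float sum = 0.0f;
    int n8 = n & ~7;                      // largest multiple of 8 <= n
    for (int i = 0; i < n8; i += 8)       // fast (notionally SIMD) body
        for (int j = 0; j < 8; ++j)
            sum += x[i + j] * y[i + j];
    for (int i = n8; i < n; ++i)          // scalar tail: at most 7 elements
        sum += x[i] * y[i];
    return sum;
}
```

The tail costs at most 7 scalar iterations per row, so the special case recovers essentially all of the SIMD throughput for awkward row lengths.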