From 6b968f38946117552ffed300771c44ba9b39d3e4 Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Fri, 25 Oct 2024 13:08:43 +0200
Subject: Bitnet changes (#106)

* Adapting iq2_bn to work without separate scale tensors

Why? It is becoming burdensome to maintain the special Bitnet conversion in
convert_hf_to_gguf.py, so I think it is better to make iq1_bn and iq2_bn just
work with the mainline conversion script (which does not generate scales).

* Adapting iq1_bn to work without separate scale tensors

* Adapting iq2_bn: CUDA dequantize

* Adapting iq2_bn: CUDA works

* Adapting iq1_bn: CUDA works

* Adapting iq1_bn, iq2_bn: NEON

* Adapting iq1_bn, iq2_bn: Metal

Dequantize works, but there is still something wrong with the dot products.

* WIP

I absolutely don't see what is wrong with the iq1_bn and iq2_bn vector dot
product kernels.

* Remove iq1_tn and iq2_tn - Part 1

Now that iq1_bn and iq2_bn have per-row scales, there is no reason to also
have iq1_tn and iq2_tn.

* Remove iq1_tn and iq2_tn - Part 2

* Bitnet: use the standard llm_build_kv to build self attention

My main motivation was to enable FA. But FA does not work anyway because the
head size is 100 for the Bitnet ternary models (and I had forgotten this
little detail).

* Revert "Avoid rebuild of GGML graph for each token (#98)"

This reverts commit f2d315b46f7aacc7df4b86bd8acba387b30e11ca. As far as I can
tell, the commit breaks Metal TG.

---------
Co-authored-by: Iwan Kawrakow
---
 include/llama.h | 2 --
 1 file changed, 2 deletions(-)

(limited to 'include/llama.h')

diff --git a/include/llama.h b/include/llama.h
index b2906693..965e5f50 100644
--- a/include/llama.h
+++ b/include/llama.h
@@ -175,8 +175,6 @@ extern "C" {
         LLAMA_FTYPE_MOSTLY_IQ4_K  = 140, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ5_K  = 141, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ6_K  = 142, // except 1d tensors
-        LLAMA_FTYPE_MOSTLY_IQ2_TN = 143, // except 1d tensors
-        LLAMA_FTYPE_MOSTLY_IQ1_TN = 144, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ4_KS = 145, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ3_KL = 146, // except 1d tensors
         LLAMA_FTYPE_MOSTLY_IQ2_KS = 147, // except 1d tensors
--
cgit v1.2.3
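
For context, a minimal sketch of the per-row-scale idea the commit message
describes: each quantized row stores one float scale next to its ternary
values, so no separate scale tensor is needed. The names (TernaryRow,
quantize_row_ternary, dequantize_row_ternary), the mean-absolute-value scale,
and the 0.5 rounding threshold are illustrative assumptions, not the actual
iq1_bn/iq2_bn code.

// Hypothetical sketch: per-row ternary quantization with the scale folded
// into the row itself, so no separate scale tensor has to be converted.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

struct TernaryRow {
    float scale;                 // one scale per row, stored with the data
    std::vector<int8_t> q;       // ternary values in {-1, 0, +1}
};

static TernaryRow quantize_row_ternary(const float * x, int n) {
    // Use the mean absolute value of the row as its scale (assumption;
    // BitNet b1.58-style quantizers commonly do something similar).
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += std::fabs(x[i]);
    const float scale = n > 0 ? float(sum / n) : 0.0f;

    TernaryRow row{scale, std::vector<int8_t>(n)};
    const float inv = scale > 0.0f ? 1.0f/scale : 0.0f;
    for (int i = 0; i < n; ++i) {
        // Round x[i]/scale to the nearest value in {-1, 0, +1}.
        const float v = x[i]*inv;
        row.q[i] = v > 0.5f ? 1 : v < -0.5f ? -1 : 0;
    }
    return row;
}

static void dequantize_row_ternary(const TernaryRow & row, float * y) {
    // Reconstruction only needs the row itself: y[i] = scale * q[i].
    for (size_t i = 0; i < row.q.size(); ++i) y[i] = row.scale * row.q[i];
}

int main() {
    const float w[8] = {0.9f, -1.1f, 0.0f, 0.4f, -0.8f, 1.0f, -0.1f, 0.7f};
    const TernaryRow r = quantize_row_ternary(w, 8);
    float y[8];
    dequantize_row_ternary(r, y);
    for (int i = 0; i < 8; ++i)
        printf("%6.2f -> %2d -> %6.2f\n", w[i], r.q[i], y[i]);
    return 0;
}

The real iq1_bn/iq2_bn formats pack the ternary values far more densely than
one int8 each; the point of the sketch is only where the scale lives, which is
what makes the separate iq1_tn/iq2_tn types (and the special conversion path)
redundant.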