ik_llama.cpp.git, branch main: commit log for ggml-quants.c
Age         Commit message                                                              Author
2024-06-22  bitnet: add 2 bpw quantization                                              Iwan Kawrakow
2024-06-22  Move Q8_K64 quantization to iqk-quantize.cpp and add copyright notice       Iwan Kawrakow
2024-06-22  iqk_mul_mat: improve iq1_bn (bitnet) on AVX2                                Iwan Kawrakow
2024-06-22  bitnet: scale is per row, not per tensor                                    Iwan Kawrakow
2024-06-22  bitnet: CUDA, scalar, AVX2                                                  Iwan Kawrakow
2024-06-22  Fix nb4                                                                     Iwan Kawrakow
2024-06-22  iqk_mul_mat: add ability to disable it                                      Iwan Kawrakow
2024-06-22  iqk_mul_mat: use block_q8_1_x4 also for AVX2                                Iwan Kawrakow
2024-06-22  iqk_mul_mat: use block_q8_0_x4 also for AVX2                                Iwan Kawrakow
2024-06-22  iqk_mul_mat for llama.cpp                                                   Iwan Kawrakow
2024-06-21  ggml : AVX IQ quants (#7845)                                                Eve
2024-06-16  ggml : fix and optimize ppc64le (ggml/849)                                  Hong Bo PENG
2024-06-16  ggml : remove duplicate include of ggml-common.h (ggml/853)                 Daniel Bevenius
2024-06-16  ggml : fix handling of zero blocks in IQ quants (#7955)                     Georgi Gerganov
2024-05-31  ggml : fix loongson compile warnings (#7537)                                Georgi Gerganov
2024-05-30  ggml : fix loongarch build (O2 issue) (#7636)                               junchao-loongson
2024-05-25  ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (#7433)      Masaya, Kato
2024-05-23  ggml : silence UB sanitizer error during iq2_xxs quantization (#0)          Georgi Gerganov
2024-05-23  ggml : drop support for QK_K=64 (#7473)                                     Georgi Gerganov
2024-05-20  ggml : add loongarch lsx and lasx support (#6454)                           junchao-loongson
2024-05-19  ggml : fix another case of quants nans (#7387)                              slaren
2024-05-18  ggml : fix quants nans when all the group weights are very close to zero (#7313)  slaren
2024-05-17  ggml-quants, llama : removed excess checks (#7274)                          Herman Semenov
2024-05-16  Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#...  Max Krasnyansky
2024-05-14  ggml : try fix ppc64 (whisper/0)                                            Georgi Gerganov
2024-05-14  ggml : optimize for ppc64le using VSX intrinsics (ggml/784)                 Hong Bo PENG
2024-05-11  build: fix and ignore msvc warnings (ggml/805)                              Borislav Stanimirov
2024-05-08  ggml : introduce bfloat16 support (#6412)                                   Justine Tunney
2024-04-26  add basic tensor data validation function (#6884)                           slaren
2024-04-25  ggml : fix MIN / MAX macros (#6904)                                         Georgi Gerganov
2024-04-24  ggml : move 32-bit arm compat in ggml-impl.h (#6865)                        Georgi Gerganov
2024-04-16  ggml : add llamafile sgemm (#6414)                                          Justine Tunney
2024-04-09  llama : add Command R Plus support (#6491)                                  Carolinabanana
2024-03-27  Make IQ1_M work for QK_K = 64 (#6327)                                       Kawrakow
2024-03-26  IQ1_M: 1.75 bpw quantization (#6302)                                        Kawrakow
2024-03-25  ggml : support AVX512VNNI (#6280)                                           Justine Tunney
2024-03-21  ggml : same IQ4_NL quantization for CPU/CUDA/Metal (#6196)                  Kawrakow
2024-03-12  ggml : reuse quantum structs across backends (#5943)                        Georgi Gerganov
2024-03-12  ggml : fix UB in IQ2_S and IQ3_S (#6012)                                    Georgi Gerganov
2024-03-11  1.5 bit: we can do even better (#5999)                                      Kawrakow
2024-03-11  ggml, ci : Windows ARM runner and build fixes (#5979)                       Michael Podvitskiy
2024-03-11  Better 1.5 bit quantization (#5971)                                         Kawrakow
2024-03-10  ggml : try fix 32-bit arm compat (whisper/1938)                             Georgi Gerganov
2024-03-09  ggml : fix unnecessary f32 -> f16 -> f32 casts (mmla) (#5951)               Georgi Gerganov
2024-03-09  ggml : remove old quantization functions (#5942)                            Georgi Gerganov
2024-03-09  ggml : add ggml-common.h to deduplicate shared code (#5940)                 Georgi Gerganov
2024-03-06  ggml : use `uint8x16_t` return type for `ggml_vqtbl1q_u8` (#5894)           bobqianic
2024-03-05  quants : use MM256_SET_M128I consistently to fix gcc 7 build (#5889)        Jared Van Bortel
2024-03-02  ggml : fix IQ3_S AVX implementation (#5834)                                 Georgi Gerganov
2024-03-02  ggml : IQ3_S improvements (#5829)                                           Kawrakow