path: root/examples/quantize/quantize.cpp
Age        | Commit message                                                                   | Author
2024-07-27 | Merge mainline llama.cpp (#3)                                                    | Kawrakow
2024-06-24 | Bitnet: tiny bity faster 1.625 bpw variant on Metal                              | Iwan Kawrakow
2024-06-22 | bitnet: add 2 bpw quantization                                                   | Iwan Kawrakow
2024-06-22 | bitnet: CUDA, scalar, AVX2                                                       | Iwan Kawrakow
2024-05-22 | common : normalize naming style (#7462)                                          | Georgi Gerganov
2024-05-19 | quantize : fix --keep-split check (#7374)                                        | Fred Douglas
2024-05-08 | ggml : introduce bfloat16 support (#6412)                                        | Justine Tunney
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658)                       | Pierrick Hymbert
2024-04-25 | quantize : add '--keep-split' to quantize model into shards (#6688)              | jiez
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387)                | slaren
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302)                                             | Kawrakow
2024-03-26 | quantize : be able to override metadata by key (#6321)                           | Kawrakow
2024-03-22 | quantize: options for output and token embedding tensors qtype (#6239)           | Kawrakow
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747)                                          | Kawrakow
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676)                                 | Kawrakow
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)                        | Kawrakow
2024-02-18 | 1.5 bit quantization (#5453)                                                     | Kawrakow
2024-02-16 | ggml : add numa options (#5377)                                                  | bmwl
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291)                  | Michael Klimenko
2024-01-30 | SOTA 3-bit quants (#5196)                                                        | Kawrakow
2024-01-30 | quantize : fix typo (#5211)                                                      | Vladimir Malyutin
2024-01-22 | llama : add Q3_K_XS (#5060)                                                      | Kawrakow
2024-01-14 | Add ability to use importance matrix for all k-quants (#4930)                    | Kawrakow
2024-01-14 | 2-bit quantizations (#4897)                                                      | Kawrakow
2024-01-11 | llama : restore intended k-quants mixes for MoE models (#4872)                   | Kawrakow
2023-11-02 | build : link against build info instead of compiling against it (#3879)          | cebtenzzre
2023-10-29 | ggml : quantization refactoring (#3833)                                          | Georgi Gerganov
2023-09-28 | build : enable more non-default compiler warnings (#3200)                        | Cebtenzzre
2023-09-18 | make : restore build-info.h dependency for several targets (#3205)               | Cebtenzzre
2023-09-15 | examples : add compiler version and target to build info (#2998)                 | Cebtenzzre
2023-09-15 | check C++ code with -Wmissing-declarations (#3184)                               | Cebtenzzre
2023-09-07 | fix some warnings from gcc and clang-tidy (#3038)                                | Cebtenzzre
2023-09-01 | Allow quantize to only copy tensors, some other improvements (#2931)             | Kerfuffle
2023-08-28 | quantize : make output filename optional again (#2823)                           | Cebtenzzre
2023-08-23 | Fix values shown in the quantize tool help (#2735)                               | Kawrakow
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398)                    | Georgi Gerganov
2023-07-18 | llama : shorten quantization descriptions                                        | Georgi Gerganov
2023-07-10 | mpi : add support for distributed inference via MPI (#2099)                      | Evan Miller
2023-06-26 | ggml : add NUMA support (#1556)                                                  | zrm
2023-06-13 | Allow "quantizing" to f16 and f32 (#1787)                                        | Kerfuffle
2023-06-10 | llama : support requantizing models instead of only allowing quantization fro...| Kerfuffle
2023-06-05 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)                            | Kawrakow
2023-05-20 | llama : add llama_init_backend() API (close #1527)                               | Georgi Gerganov
2023-05-12 | ggml : remove bit shuffling (#1405)                                              | Georgi Gerganov
2023-05-05 | quantize: make output filename optional, default to ggml-model-<ftype>.bin (#...)| slaren
2023-05-01 | Add git-based build information for better issue tracking (#1232)                | DannyDaemonic
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218)                                   | Stephan Walter
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187)                                    | Georgi Gerganov
2023-04-26 | quantize : use `map` to assign quantization type from `string` (#1191)           | Pavol Rusnak