ik_llama.cpp.git (branch: main)
path: examples/quantize-stats/quantize-stats.cpp
Age         Commit message                                                           Author
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398)            Georgi Gerganov
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)       Stephan Walter
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  Didzis Gosko
2023-06-16  build : fix and ignore MSVC warnings (#1889)                              Borislav Stanimirov
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)                     Kawrakow
2023-05-17  Remove unused n_parts parameter (#1509)                                   Stephan Walter
2023-05-01  Add git-based build information for better issue tracking (#1232)        DannyDaemonic
2023-04-20  llama : multi-threaded quantization (#1075)                               Kawrakow
2023-04-17  quantize-stats : fix bug in --type argument                               Georgi Gerganov
2023-04-14  Expose type name from ggml (#970)                                         Pavol Rusnak
2023-04-13  llama : merge llama_internal.h into llama.h                               Georgi Gerganov
2023-04-10  Rewrite loading code to try to satisfy everyone:                          comex
2023-04-08  Add quantize-stats command for testing quantization (#728)               unbounded