ik_llama.cpp.git, branch main
Commit log for examples/perplexity/perplexity.cpp (most recent first)
Age         Commit message (Author)

2024-02-03  refactor : switch to emplace_back to avoid extra object (#5291)  (Michael Klimenko)
2024-02-02  perplexity : fix KL divergence calculations on Windows (#5273)  (kalomaze)
2024-01-23  Additional KL-divergence statistics (#5081)  (Kawrakow)
2024-01-23  minor : clean-up some warnings and style (#5094)  (Georgi Gerganov)
2024-01-22  KL-divergence (#5076)  (Kawrakow)
2024-01-21  Add ability to evaluate multiple choice tasks (#5047)  (Kawrakow)
2024-01-20  perplexity : fix MSVC build after #5020 (#5043)  (Jared Van Bortel)
2024-01-19  winogrande: evaluate log-probs in parallel (#5036)  (Kawrakow)
2024-01-19  perplexity: avoid unnecessary allocations and logit copies (#5035)  (Kawrakow)
2024-01-19  perplexity : faster Winogrande via batching (#5024)  (Georgi Gerganov)
2024-01-18  perplexity : fix winogrande N tasks option  (Georgi Gerganov)
2024-01-18  HellaSwag: speed up by parallelizing log-prob evaluation (#5020)  (Kawrakow)
2024-01-18  perplexity : faster HellaSwag via batching (#5017)  (Georgi Gerganov)
2024-01-18  Add Winogrande evaluation (#5015)  (Kawrakow)
2024-01-16  perplexity : fix kv cache handling for hellaswag (#4981)  (Georgi Gerganov)
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  (Kerfuffle)
2023-11-02  build : link against build info instead of compiling against it (#3879)  (cebtenzzre)
2023-10-29  Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)  (Kerfuffle)
2023-10-23  llama : remove token functions with `context` args in favor of `model` (#3720)  (Marcus Dunn)
2023-09-28  llama.cpp : split llama_context_params into model and context params (#3301)  (slaren)
2023-09-28  llama : custom attention mask + parallel decoding + no context swaps (#3228)  (Georgi Gerganov)
2023-09-18  make : restore build-info.h dependency for several targets (#3205)  (Cebtenzzre)
2023-09-15  examples : add compiler version and target to build info (#2998)  (Cebtenzzre)
2023-09-15  check C++ code with -Wmissing-declarations (#3184)  (Cebtenzzre)
2023-09-08  examples : make n_ctx warning work again (#3066)  (Cebtenzzre)
2023-09-07  fix some warnings from gcc and clang-tidy (#3038)  (Cebtenzzre)
2023-09-04  build : on Mac OS enable Metal by default (#2901)  (Georgi Gerganov)
2023-08-29  Tell users attempting to run perplexity with too few tokens to use more (#2882)  (Kawrakow)
2023-08-28  YAML result logging + preset script (#2657)  (Johannes Gäßler)
2023-08-27  llama : speedup tokenization (#2831)  (Kawrakow)
2023-08-27  llama : more tokenizer fixes (#2810)  (Georgi Gerganov)
2023-08-26  Fix HellaSwag (#2805)  (Kawrakow)
2023-08-25  Faster perplexity computation (#2786)  (Kawrakow)
2023-08-23  llm : add Falcon support (#2717)  (Georgi Gerganov)
2023-08-23  Strided perplexity (#2714)  (Kawrakow)
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398)  (Georgi Gerganov)
2023-08-21  HellaSwag: split token evaluation into batches if needed (#2681)  (Kawrakow)
2023-08-20  More efficient Hellaswag implementation (#2677)  (Kawrakow)
2023-08-18  perplexity : more meaningful ETA number - 2 decimal points  (Georgi Gerganov)
2023-08-04  build : fix several cast and printf warnings (#2499)  (Borislav Stanimirov)
2023-07-28  perplexity : add Hellaswag calculation (#2389)  (klosax)
2023-07-22  Perplexity: Compute scores correlated to HellaSwag (#2312)  (klosax)
2023-07-18  ci : integrate with ggml-org/ci (#2250)  (Georgi Gerganov)
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  (Evan Miller)
2023-07-06  convert : update for baichuan (#2081)  (Judd)
2023-06-29  Use unsigned for random seed (#2006)  (Howard Su)
2023-06-26  ggml : add NUMA support (#1556)  (zrm)
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  (Didzis Gosko)
2023-06-16  build : fix and ignore MSVC warnings (#1889)  (Borislav Stanimirov)
2023-05-20  llama : add llama_init_backend() API (close #1527)  (Georgi Gerganov)