path: root/examples/perplexity/perplexity.cpp
Age         Commit message  (Author)
2024-02-03  refactor : switch to emplace_back to avoid extra object (#5291)  (Michael Klimenko)
2024-02-02  perplexity : fix KL divergence calculations on Windows (#5273)  (kalomaze)
2024-01-23  Additional KL-divergence statistics (#5081)  (Kawrakow)
2024-01-23  minor : clean-up some warnings and style (#5094)  (Georgi Gerganov)
2024-01-22  KL-divergence (#5076)  (Kawrakow)
2024-01-21  Add ability to evaluate multiple choice tasks (#5047)  (Kawrakow)
2024-01-20  perplexity : fix MSVC build after #5020 (#5043)  (Jared Van Bortel)
2024-01-19  winogrande: evaluate log-probs in parallel (#5036)  (Kawrakow)
2024-01-19  perplexity: avoid unnecessary allocations and logit copies (#5035)  (Kawrakow)
2024-01-19  perplexity : faster Winogrande via batching (#5024)  (Georgi Gerganov)
2024-01-18  perplexity : fix winogrande N tasks option  (Georgi Gerganov)
2024-01-18  HellaSwag: speed up by parallelizing log-prob evaluation (#5020)  (Kawrakow)
2024-01-18  perplexity : faster HellaSwag via batching (#5017)  (Georgi Gerganov)
2024-01-18  Add Winogrande evaluation (#5015)  (Kawrakow)
2024-01-16  perplexity : fix kv cache handling for hellaswag (#4981)  (Georgi Gerganov)
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  (Kerfuffle)
2023-11-02  build : link against build info instead of compiling against it (#3879)  (cebtenzzre)
2023-10-29  Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)  (Kerfuffle)
2023-10-23  llama : remove token functions with `context` args in favor of `model` (#3720)  (Marcus Dunn)
2023-09-28  llama.cpp : split llama_context_params into model and context params (#3301)  (slaren)
2023-09-28  llama : custom attention mask + parallel decoding + no context swaps (#3228)  (Georgi Gerganov)
2023-09-18  make : restore build-info.h dependency for several targets (#3205)  (Cebtenzzre)
2023-09-15  examples : add compiler version and target to build info (#2998)  (Cebtenzzre)
2023-09-15  check C++ code with -Wmissing-declarations (#3184)  (Cebtenzzre)
2023-09-08  examples : make n_ctx warning work again (#3066)  (Cebtenzzre)
2023-09-07  fix some warnings from gcc and clang-tidy (#3038)  (Cebtenzzre)
2023-09-04  build : on Mac OS enable Metal by default (#2901)  (Georgi Gerganov)
2023-08-29  Tell users attempting to run perplexity with too few tokens to use more (#2882)  (Kawrakow)
2023-08-28  YAML result logging + preset script (#2657)  (Johannes Gäßler)
2023-08-27  llama : speedup tokenization (#2831)  (Kawrakow)
2023-08-27  llama : more tokenizer fixes (#2810)  (Georgi Gerganov)
2023-08-26  Fix HellaSwag (#2805)  (Kawrakow)
2023-08-25  Faster perplexity computation (#2786)  (Kawrakow)
2023-08-23  llm : add Falcon support (#2717)  (Georgi Gerganov)
2023-08-23  Strided perplexity (#2714)  (Kawrakow)
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398)  (Georgi Gerganov)
2023-08-21  HellaSwag: split token evaluation into batches if needed (#2681)  (Kawrakow)
2023-08-20  More efficient Hellaswag implementation (#2677)  (Kawrakow)
2023-08-18  perplexity : more meaningful ETA number - 2 decimal points  (Georgi Gerganov)
2023-08-04  build : fix several cast and printf warnings (#2499)  (Borislav Stanimirov)
2023-07-28  perplexity : add Hellaswag calculation (#2389)  (klosax)
2023-07-22  Perplexity: Compute scores correlated to HellaSwag (#2312)  (klosax)
2023-07-18  ci : integrate with ggml-org/ci (#2250)  (Georgi Gerganov)
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  (Evan Miller)
2023-07-06  convert : update for baichuan (#2081)  (Judd)
2023-06-29  Use unsigned for random seed (#2006)  (Howard Su)
2023-06-26  ggml : add NUMA support (#1556)  (zrm)
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  (Didzis Gosko)
2023-06-16  build : fix and ignore MSVC warnings (#1889)  (Borislav Stanimirov)
2023-05-20  llama : add llama_init_backend() API (close #1527)  (Georgi Gerganov)