path: root/common/common.cpp
Age         Commit message  (Author)
2023-12-21  common : remove incorrect --model-draft default (#4568)  (Jared Van Bortel)
2023-12-13  common : add `--version` option to show build info in CLI (#4433)  (Siwen Yu)
2023-12-07  llama : per-layer KV cache + quantum K cache (#4309)  (Georgi Gerganov)
2023-12-05  llama : allow overriding GGUF metadata when loading model (#4092)  (Kerfuffle)
2023-12-05  sampling : custom samplers order (#4285)  (MaggotHATE)
2023-11-23  llama : KV cache view API + better KV cache management (#4170)  (Georgi Gerganov)
2023-11-20  main : Add ChatML functionality to main example (#4046)  (Seb C)
2023-11-19  common : comma should be semicolon (#4137)  (kchro3)
2023-11-17  common : improve yaml log escaping (#4080)  (Jannis Schönleber)
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  (Kerfuffle)
2023-11-05  ggml-cuda : fix f16 mul mat (#3961)  (slaren)
2023-11-05  Allow common process_escapes to handle \x sequences (#3928)  (Kerfuffle)
2023-11-03  speculative : change default p_accept to 0.5 + CLI args (#3919)  (Georgi Gerganov)
2023-11-02  build : link against build info instead of compiling against it (#3879)  (cebtenzzre)
2023-11-01  llama : implement YaRN RoPE scaling (#2268)  (cebtenzzre)
2023-11-01  common : minor (#3715)  (Georgi Gerganov)
2023-11-01  common : allow caller to handle help/argument exceptions (#3715)  (bandoti)
2023-10-31  samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841)  (kalomaze)
2023-10-29  Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)  (Kerfuffle)
2023-10-28  llama : add option for greedy sampling with probs (#3813)  (Georgi Gerganov)
2023-10-28  common : print that one line of the syntax help *also* to standard output (#3...  (Henk Poley)
2023-10-23  llama : remove token functions with `context` args in favor of `model` (#3720)  (Marcus Dunn)
2023-10-22  main : escape prompt for cfg_negative_prompt and consecutive inputs in main w...  (vvhg1)
2023-10-20  sampling : refactor init to use llama_sampling_params (#3696)  (Georgi Gerganov)
2023-10-18  speculative : add tree-based sampling example (#3624)  (Georgi Gerganov)
2023-10-17  tokenizer : special token handling (#3538)  (staviq)
2023-10-12  examples: support LLaVA v1.5 (multimodal model) (#3436)  (M. Yusuf Sarıgöz)
2023-10-11  common : fix mirostat state when using multiple sequences (#3543)  (Kerfuffle)
2023-10-07  Fix trying to strip newline from empty prompt and cfg prompt file content (#3...  (Kerfuffle)
2023-10-06  parallel : add option to load external prompt file (#3416)  (pudepiedj)
2023-10-06  server : reuse llama_sample_token common util (#3494)  (Jhen-Jie Hong)
2023-10-05  build : use std::make_tuple() for compatibility with older GCC versions (#3488)  (Kenvix ⭐)
2023-10-05  common : process escape sequences in reverse prompts (#3461)  (staviq)
2023-10-03  Work on the BPE tokenizer (#3252)  (goerch)
2023-10-02  infill : add new example + extend server API (#3296)  (vvhg1)
2023-09-28  build : enable more non-default compiler warnings (#3200)  (Cebtenzzre)
2023-09-28  llama.cpp : split llama_context_params into model and context params (#3301)  (slaren)
2023-09-28  train : finetune LORA (#2632)  (xaedes)
2023-09-28  llama : custom attention mask + parallel decoding + no context swaps (#3228)  (Georgi Gerganov)
2023-09-20  llama : allow gguf RoPE keys to be overridden with defaults (#3240)  (Cebtenzzre)
2023-09-16  Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (...  (goerch)
2023-09-15  check C++ code with -Wmissing-declarations (#3184)  (Cebtenzzre)
2023-09-15  llama : remove mtest (#3177)  (Roland)
2023-09-13  speculative: add --n-gpu-layers-draft option (#3063)  (FK)
2023-09-07  fix some warnings from gcc and clang-tidy (#3038)  (Cebtenzzre)
2023-09-07  metal : fix kernel_norm (fixes Falcon on Metal) (#3057)  (Georgi Gerganov)
2023-09-05  examples : replace fprintf to stdout with printf (#3017)  (Cebtenzzre)
2023-09-05  speculative : add grammar support (#2991)  (Georgi Gerganov)
2023-09-04  build : on Mac OS enable Metal by default (#2901)  (Georgi Gerganov)
2023-09-03  speculative : PoC for speeding-up inference via speculative sampling (#2926)  (Georgi Gerganov)