path: root/examples/main/main.cpp
Age        | Commit message                                                                    | Author
2024-03-11 | llama : more consistent names of count variables (#5994)                          | Georgi Gerganov
2024-03-04 | main : support special tokens as reverse/anti prompt (#5847)                      | DAN™
2024-02-25 | llama : refactor k-shift implementation + KV defragmentation (#5691)              | Georgi Gerganov
2024-02-21 | examples : do not assume BOS when shifting context (#5622)                        | Jared Van Bortel
2024-02-16 | ggml : add numa options (#5377)                                                   | bmwl
2024-02-11 | main : ctrl+C print timing in non-interactive mode (#3873)                        | Georgi Gerganov
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291)                   | Michael Klimenko
2024-01-30 | main : allow empty --prompt-cache file (#5176)                                    | divinity76
2024-01-13 | main : add parameter --no-display-prompt (#4541)                                  | Yann Follet
2024-01-11 | main : better name for variable n_print (#4874)                                   | Georgi Gerganov
2024-01-11 | main : disable token count by default (#4874)                                     | Georgi Gerganov
2024-01-11 | main : print total token count and tokens consumed so far (#4874)                 | pudepiedj
2024-01-08 | main : add self-extend support (#4815)                                            | Georgi Gerganov
2023-12-05 | sampling : custom samplers order (#4285)                                          | MaggotHATE
2023-11-30 | main : pass LOG_TEE callback to llama.cpp log (#4033)                             | Andrew Godfrey
2023-11-20 | main : Add ChatML functionality to main example (#4046)                           | Seb C
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)                | Kerfuffle
2023-11-02 | build : link against build info instead of compiling against it (#3879)           | cebtenzzre
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)               | Kerfuffle
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720)    | Marcus Dunn
2023-10-22 | main : escape prompt for cfg_negative_prompt and consecutive inputs in main w...  | vvhg1
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696)                     | Georgi Gerganov
2023-10-18 | speculative : add tree-based sampling example (#3624)                             | Georgi Gerganov
2023-10-17 | llama : avoid fprintf in favor of LLAMA_LOG (#3538)                               | Georgi Gerganov
2023-10-17 | tokenizer : special token handling (#3538)                                        | staviq
2023-10-11 | main : fix session loading bug (#3400)                                            | Georgi Gerganov
2023-10-11 | common : fix mirostat state when using multiple sequences (#3543)                 | Kerfuffle
2023-10-03 | main : consistent prefix/suffix coloring (#3425)                                  | h-h-h-h
2023-10-03 | llama : fix session saving/loading (#3400)                                        | Georgi Gerganov
2023-09-28 | build : enable more non-default compiler warnings (#3200)                         | Cebtenzzre
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301)      | slaren
2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228)      | Georgi Gerganov
2023-09-15 | examples : add compiler version and target to build info (#2998)                  | Cebtenzzre
2023-09-15 | check C++ code with -Wmissing-declarations (#3184)                                | Cebtenzzre
2023-09-15 | llama : remove mtest (#3177)                                                      | Roland
2023-09-08 | examples : make n_ctx warning work again (#3066)                                  | Cebtenzzre
2023-09-08 | build : do not use _GNU_SOURCE gratuitously (#2035)                               | Przemysław Pawełczyk
2023-09-07 | fix some warnings from gcc and clang-tidy (#3038)                                 | Cebtenzzre
2023-09-04 | build : on Mac OS enable Metal by default (#2901)                                 | Georgi Gerganov
2023-09-03 | speculative : PoC for speeding-up inference via speculative sampling (#2926)      | Georgi Gerganov
2023-09-03 | perplexity : fix ETA by warming up the model with an empty run                    | Georgi Gerganov
2023-08-30 | main : log file (#2748)                                                           | staviq
2023-08-28 | YAML result logging + preset script (#2657)                                       | Johannes Gäßler
2023-08-27 | llama : more tokenizer fixes (#2810)                                              | Georgi Gerganov
2023-08-26 | main : fix bug (penalize_nl=false doesn't work) + suppress warning on mingw (...  | Dr. Tom Murphy VII Ph.D
2023-08-26 | Fix spm whitespaces (#2806)                                                       | klosax
2023-08-24 | Fix for main example getting stuck when -n -2 and --interactive (#2767)           | Kerfuffle
2023-08-23 | llm : add Falcon support (#2717)                                                  | Georgi Gerganov
2023-08-23 | main : insert bos if no tokens (#2727)                                            | klosax
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398)                     | Georgi Gerganov