path: root/llama.h
Age        | Commit message | Author
2023-12-16 | lora : add support for non-llama models (#3333) | slaren
2023-12-12 | llama : document logits_all deprecation (#4418) | crasm
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309) | Georgi Gerganov
2023-12-05 | llama : allow overriding GGUF metadata when loading model (#4092) | Kerfuffle
2023-11-25 | Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189) | crasm
2023-11-23 | llama : KV cache view API + better KV cache management (#4170) | Georgi Gerganov
2023-11-17 | llama : add functions to get the model's metadata (#4013) | slaren
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | Kerfuffle
2023-11-03 | common : YAYF (yet another YARN fix) (#3925) | Georgi Gerganov
2023-11-01 | llama : implement YaRN RoPE scaling (#2268) | cebtenzzre
2023-10-31 | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | kalomaze
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | Kerfuffle
2023-10-29 | ggml : quantization refactoring (#3833) | Georgi Gerganov
2023-10-28 | llama : add option for greedy sampling with probs (#3813) | Georgi Gerganov
2023-10-27 | cuda : improve text-generation and batched decoding performance (#3776) | Georgi Gerganov
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720) | Marcus Dunn
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696) | Georgi Gerganov
2023-10-18 | speculative : add tree-based sampling example (#3624) | Georgi Gerganov
2023-10-17 | tokenizer : special token handling (#3538) | staviq
2023-10-03 | llama : fix session saving/loading (#3400) | Georgi Gerganov
2023-10-03 | llama : expose model's rope_freq_scale in the API (#3418) | Alex Klinkhamer
2023-10-02 | infill : add new example + extend server API (#3296) | vvhg1
2023-09-29 | llama.cpp : add documentation about rope_freq_base and scale values (#3401) | slaren
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301) | slaren
2023-09-28 | train : finetune LORA (#2632) | xaedes
2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | Georgi Gerganov
2023-09-27 | metal : reusing llama.cpp logging (#3152) | Rickard Hallerbäck
2023-09-16 | Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (... | goerch
2023-09-15 | check C++ code with -Wmissing-declarations (#3184) | Cebtenzzre
2023-09-08 | examples : make n_ctx warning work again (#3066) | Cebtenzzre
2023-09-05 | speculative : add grammar support (#2991) | Georgi Gerganov
2023-09-01 | Allow quantize to only copy tensors, some other improvements (#2931) | Kerfuffle
2023-08-29 | added `struct` to llama_dump_timing_info_yaml's `llama_context` (#2857) | Marcus Dunn
2023-08-28 | YAML result logging + preset script (#2657) | Johannes Gäßler
2023-08-28 | llama.h : add missing struct keyword for C compat in callback type (#2847) | igarnier
2023-08-27 | llama : more tokenizer fixes (#2810) | Georgi Gerganov
2023-08-25 | llama : fix struct decl (#2790) | Marcus Dunn
2023-08-25 | llama : add llama_beam_search() (#2267) | Matt Pulver
2023-08-25 | llama-bench : add model sizes (#2771) | slaren
2023-08-24 | Added `enum` to `llama_token_get_type` return type (#2774) | Marcus Dunn
2023-08-23 | llm : add Falcon support (#2717) | Georgi Gerganov
2023-08-22 | gguf : add ftype meta info to the model (#2710) | Georgi Gerganov
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398) | Georgi Gerganov
2023-08-18 | llama : add benchmark example (#2626) | slaren
2023-08-14 | llama : add missing enum keyword in function signatures (#2610) | Kamil Tomšík
2023-08-09 | add log_callback to llama_context_params for custom logging. (#2234) | grahameth
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | Johannes Gäßler
2023-07-25 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | Kawrakow
2023-07-24 | make rms_norm_eps a parameter (#2374) | slaren
2023-07-23 | llama : add grammar-based sampling (#1773) | Evan Jones
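The most structural change in this log is the 2023-09-28 split of llama_context_params into separate model-level and context-level parameter structs (#3301). As a minimal sketch of how a caller sets up a model and context against that post-split llama.h — the path "model.gguf" and the parameter values are placeholders, and error handling is abbreviated:

```c
/* Minimal sketch of the post-#3301 setup sequence. Assumes the late-2023
 * llama.h, where llama_backend_init still took a NUMA flag. "model.gguf"
 * is a placeholder path. */
#include <stdio.h>
#include "llama.h"

int main(void) {
    llama_backend_init(false /* numa */);

    // Model-level parameters: mmap/mlock, GPU offload, vocab-only loading, ...
    struct llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 0;  // CPU-only for this sketch

    struct llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Context-level parameters: context length, batch size, RoPE scaling, ...
    struct llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx = 2048;

    struct llama_context * ctx = llama_new_context_with_model(model, cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to create context\n");
        llama_free_model(model);
        return 1;
    }

    /* ... tokenize, decode, sample ... */

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

After the split, load-time settings travel with llama_load_model_from_file while per-context settings (including the rope_freq_base/rope_freq_scale values documented in #3401) stay on llama_new_context_with_model, so multiple contexts can share one loaded model.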