path: root/llama.h
Age        | Commit message                                                                    | Author
2024-05-14 | ggml : add RPC backend (#6829)                                                    | Radoslav Gerganov
2024-05-08 | llama : add BPE pre-tokenization for Qwen2 (#7114)                                | Ren Xuancheng
2024-05-08 | convert : add BPE pre-tokenization for DBRX (#7132)                               | DAN™
2024-05-08 | ggml : introduce bfloat16 support (#6412)                                         | Justine Tunney
2024-05-07 | Fix OLMo HF to GGUF conversion (#6910)                                            | nopperl
2024-05-05 | command-r : add BPE pre-tokenization (#7063)                                      | DAN™
2024-05-04 | tests : add test-tokenizer-0.sh + fix some tokenizers (#7036)                     | Georgi Gerganov
2024-05-03 | llama : rename ctx to user_data in progress_callback (#7045)                      | Daniel Bevenius
2024-04-30 | ggml : add Flash Attention (#5021)                                                | Georgi Gerganov
2024-04-29 | llama : fix BPE pre-tokenization (#6920)                                          | Georgi Gerganov
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658)                        | Pierrick Hymbert
2024-04-26 | add basic tensor data validation function (#6884)                                 | slaren
2024-04-25 | quantize : add '--keep-split' to quantize model into shards (#6688)               | jiez
2024-04-24 | llama : add llama_get_pooling_type function (#6862)                               | Douglas Hanley
2024-04-24 | Server: fix seed for multiple slots (#6835)                                       | Johannes Gäßler
2024-04-21 | llama : add option to render special/control tokens (#6807)                       | Georgi Gerganov
2024-04-21 | llama : support Llama 3 HF conversion (#6745)                                     | Pedro Cuenca
2024-04-11 | grammars: 1.5x faster inference w/ complex grammars (vector reserves / reuses...  | Olivier Chafik
2024-04-09 | BERT tokenizer fixes (#6498)                                                      | Jared Van Bortel
2024-04-08 | llama : support negative ith in llama_get_ API (#6519)                            | Rick G
2024-04-08 | llama : save and restore kv cache for single seq id (#6341)                       | Jan Boon
2024-04-04 | examples : add GBNF validator program (#5948)                                     | Clint Herron
2024-03-28 | convert : refactor vocab selection logic (#6355)                                  | Jared Van Bortel
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122)                         | compilade
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302)                                              | Kawrakow
2024-03-26 | quantize : be able to override metadata by key (#6321)                            | Kawrakow
2024-03-22 | quantize: options for output and token embedding tensors qtype (#6239)            | Kawrakow
2024-03-22 | llama_model_loader: support multiple split/shard GGUFs (#6187)                    | Pierrick Hymbert
2024-03-15 | llama : add support for control vectors (#5970)                                   | Theia Vogel
2024-03-14 | llama : support models without vocabulary (#5798)                                 | Michael Podvitskiy
2024-03-13 | llama : add pipeline parallelism support (#6017)                                  | slaren
2024-03-11 | llama : more consistent names of count variables (#5994)                          | Georgi Gerganov
2024-03-11 | llama : fix F16/F32 downcast + improve names (#5980)                              | Georgi Gerganov
2024-03-10 | llama : add support for GritLM (#5959)                                            | DAN™
2024-03-08 | llama : support Mamba Selective State Space Models (#5328)                        | compilade
2024-03-04 | llama : fix embeddings (#5796)                                                    | Georgi Gerganov
2024-03-03 | llama : allow for user specified embedding pooling type (#5849)                   | Douglas Hanley
2024-03-02 | llama : add abort_callback to interrupt computation (#5409)                       | Michael Podvitskiy
2024-03-01 | llama : cleanup unused mmq flags (#5772)                                          | Pierrick Hymbert
2024-02-29 | llama : constified `llama_set_state_data`'s `src` (#5774)                         | Marcus Dunn
2024-02-28 | llama : remove deprecated API (#5770)                                             | Georgi Gerganov
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747)                                           | Kawrakow
2024-02-27 | llama : fix defrag bugs + add parameter (#5735)                                   | Georgi Gerganov
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range...  | Kawrakow
2024-02-25 | llama : refactor k-shift implementation + KV defragmentation (#5691)              | Georgi Gerganov
2024-02-25 | code : normalize enum names (#5697)                                               | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676)                                  | Kawrakow
2024-02-22 | Add docs for llama_chat_apply_template (#5645)                                    | Xuan Son Nguyen
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)                         | Kawrakow
2024-02-19 | llama : add llama_chat_apply_template() (#5538)                                   | Xuan Son Nguyen