path: root/common/common.cpp
Age        | Commit message                                                            | Author
2024-04-24 | Server: fix seed for multiple slots (#6835)                               | Johannes Gäßler
2024-04-21 | llama : add option to render special/control tokens (#6807)               | Georgi Gerganov
2024-04-20 | common : try to fix Android CI (#6780)                                    | Georgi Gerganov
2024-04-16 | ggml : add llamafile sgemm (#6414)                                        | Justine Tunney
2024-04-15 | `main`: add --json-schema / -j flag (#6659)                               | Olivier Chafik
2024-04-11 | eval-callback: Example how to use eval callback for debugging (#6576)     | Pierrick Hymbert
2024-04-09 | BERT tokenizer fixes (#6498)                                              | Jared Van Bortel
2024-04-08 | llama : save and restore kv cache for single seq id (#6341)               | Jan Boon
2024-04-04 | common: remove duplicate check for curl (#6471)                           | Daniel Bevenius
2024-03-27 | common : change --no-penalize-nl to --penalize-nl (#6334)                 | Sigbjørn Skjæret
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299)                            | slaren
2024-03-25 | examples : add "retrieval" (#6193)                                        | Minsoo Cheong
2024-03-23 | common: llama_load_model_from_url split support (#6192)                   | Pierrick Hymbert
2024-03-23 | lookup: complement data from context with general text statistics (#5479) | Johannes Gäßler
2024-03-22 | common : default --hf-file to --model (#6234)                             | Georgi Gerganov
2024-03-22 | common : add HF arg helpers (#6234)                                       | Georgi Gerganov
2024-03-22 | metal : pad n_ctx by 32 (#6177)                                           | Georgi Gerganov
2024-03-22 | Fix params underscore convert to dash. (#6203)                            | DAN™
2024-03-21 | Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183)   | Kawrakow
2024-03-19 | common : print usage on '-h' and '--help' (#6145)                         | DAN™
2024-03-18 | common : tidy-up argument parsing (#6105)                                 | DAN™
2024-03-17 | common: llama_load_model_from_url using --model-url (#6098)               | Pierrick Hymbert
2024-03-16 | common : refactor nested if causing error C1061 on MSVC (#6101)           | DAN™
2024-03-15 | llama : add support for control vectors (#5970)                           | Theia Vogel
2024-03-14 | embedding : print cosine similarity (#899)                                | Georgi Gerganov
2024-03-13 | llama : add pipeline parallelism support (#6017)                          | slaren
2024-03-11 | llama : more consistent names of count variables (#5994)                  | Georgi Gerganov
2024-03-09 | server : normalize embeddings (#5956)                                     | SeungWon Jeong
2024-03-08 | llama : support Mamba Selective State Space Models (#5328)                | compilade
2024-03-04 | llama : fix embeddings (#5796)                                            | Georgi Gerganov
2024-03-04 | speculative : implement stochastic speculative sampling (#5625)           | Minsoo Cheong
2024-03-03 | llama : allow for user specified embedding pooling type (#5849)           | Douglas Hanley
2024-03-02 | Support multiple GPUs (split mode) on SYCL backend (#5806)                | Neo Zhang Jianyu
2024-03-01 | common : fix flag `--logits-all` to `--all-logits` (#5805)                | Miwa / Ensan
2024-03-01 | llama : cleanup unused mmq flags (#5772)                                  | Pierrick Hymbert
2024-02-27 | llama : fix defrag bugs + add parameter (#5735)                           | Georgi Gerganov
2024-02-25 | code : normalize enum names (#5697)                                       | Georgi Gerganov
2024-02-18 | common, server : surface min_keep as its own parameter (#5567)            | Robey Holderith
2024-02-18 | common : fix ub (#5530)                                                   | Georgi Gerganov
2024-02-18 | ggml, common, examples, tests : fixed type arguments in printf (#5528)    | Herman Semenov
2024-02-16 | server : add "samplers" param to control the samplers order (#5494)       | Alexey Parfenov
2024-02-16 | ggml : add numa options (#5377)                                           | bmwl
2024-02-11 | common : use enums for sampler types (#5418)                              | Alexey Parfenov
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966)                        | snadampal
2024-02-07 | Basic Vulkan Multi-GPU implementation (#5321)                             | 0cc4m
2024-02-05 | common : add dynamic temperature parameters to main example cli (#5295)   | l3utterfly
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291)           | Michael Klimenko
2024-01-31 | llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)   | Georgi Gerganov
2024-01-31 | Vulkan Fixes (#5223)                                                      | 0cc4m
2024-01-30 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)          | Jared Van Bortel