path: root/common/common.cpp
Age | Commit message | Author
2024-02-18 | common : fix ub (#5530) | Georgi Gerganov
2024-02-18 | ggml, common, examples, tests : fixed type arguments in printf (#5528) | Herman Semenov
2024-02-16 | server : add "samplers" param to control the samplers order (#5494) | Alexey Parfenov
2024-02-16 | ggml : add numa options (#5377) | bmwl
2024-02-11 | common : use enums for sampler types (#5418) | Alexey Parfenov
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966) | snadampal
2024-02-07 | Basic Vulkan Multi-GPU implementation (#5321) | 0cc4m
2024-02-05 | common : add dynamic temperature parameters to main example cli (#5295) | l3utterfly
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291) | Michael Klimenko
2024-01-31 | llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240) | Georgi Gerganov
2024-01-31 | Vulkan Fixes (#5223) | 0cc4m
2024-01-30 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) | Jared Van Bortel
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-23 | minor : clean-up some warnings and style (#5094) | Georgi Gerganov
2024-01-22 | KL-divergence (#5076) | Kawrakow
2024-01-21 | Add ability to evaluate multiple choice tasks (#5047) | Kawrakow
2024-01-18 | Add Winogrande evaluation (#5015) | Kawrakow
2024-01-16 | speculative : threading options (#4959) | stduhpf
2024-01-13 | main : add parameter --no-display-prompt (#4541) | Yann Follet
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-12 | common : streamline the formatting of help (#4890) | howlger
2024-01-12 | llama : fix llm_build_k_shift to use correct n_rot (#4889) | Georgi Gerganov
2024-01-11 | main : better name for variable n_print (#4874) | Georgi Gerganov
2024-01-11 | main : disable token count by default (#4874) | Georgi Gerganov
2024-01-11 | main : print total token count and tokens consumed so far (#4874) | pudepiedj
2024-01-08 | common : fix the short form of `--grp-attn-w`, not `-gat` (#4825) | howlger
2024-01-08 | main : add self-extend support (#4815) | Georgi Gerganov
2023-12-30 | ggml : add ggml_cpu_has_avx_vnni() (#4589) | automaticcat
2023-12-21 | common : remove incorrect --model-draft default (#4568) | Jared Van Bortel
2023-12-13 | common : add `--version` option to show build info in CLI (#4433) | Siwen Yu
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309) | Georgi Gerganov
2023-12-05 | llama : allow overriding GGUF metadata when loading model (#4092) | Kerfuffle
2023-12-05 | sampling : custom samplers order (#4285) | MaggotHATE
2023-11-23 | llama : KV cache view API + better KV cache management (#4170) | Georgi Gerganov
2023-11-20 | main : Add ChatML functionality to main example (#4046) | Seb C
2023-11-19 | common : comma should be semicolon (#4137) | kchro3
2023-11-17 | common : improve yaml log escaping (#4080) | Jannis Schönleber
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | Kerfuffle
2023-11-05 | ggml-cuda : fix f16 mul mat (#3961) | slaren
2023-11-05 | Allow common process_escapes to handle \x sequences (#3928) | Kerfuffle
2023-11-03 | speculative : change default p_accept to 0.5 + CLI args (#3919) | Georgi Gerganov
2023-11-02 | build : link against build info instead of compiling against it (#3879) | cebtenzzre
2023-11-01 | llama : implement YaRN RoPE scaling (#2268) | cebtenzzre
2023-11-01 | common : minor (#3715) | Georgi Gerganov
2023-11-01 | common : allow caller to handle help/argument exceptions (#3715) | bandoti
2023-10-31 | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | kalomaze
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | Kerfuffle
2023-10-28 | llama : add option for greedy sampling with probs (#3813) | Georgi Gerganov
2023-10-28 | common : print that one line of the syntax help *also* to standard output (#3... | Henk Poley
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720) | Marcus Dunn