ik_llama.cpp.git (branch: main) : commit log for common/common.cpp
Age         Commit message  (Author)
2024-02-03  refactor : switch to emplace_back to avoid extra object (#5291)  (Michael Klimenko)
2024-01-31  llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)  (Georgi Gerganov)
2024-01-31  Vulkan Fixes (#5223)  (0cc4m)
2024-01-30  kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)  (Jared Van Bortel)
2024-01-28  ggml : add unified SYCL backend for Intel GPUs (#2690)  (Abhilash Majumder)
2024-01-23  minor : clean-up some warnings and style (#5094)  (Georgi Gerganov)
2024-01-22  KL-divergence (#5076)  (Kawrakow)
2024-01-21  Add ability to evaluate multiple choice tasks (#5047)  (Kawrakow)
2024-01-18  Add Winogrande evaluation (#5015)  (Kawrakow)
2024-01-16  speculative : threading options (#4959)  (stduhpf)
2024-01-13  main : add parameter --no-display-prompt (#4541)  (Yann Follet)
2024-01-12  llama : ggml-backend integration (#4766)  (slaren)
2024-01-12  common : streamline the formatting of help (#4890)  (howlger)
2024-01-12  llama : fix llm_build_k_shift to use correct n_rot (#4889)  (Georgi Gerganov)
2024-01-11  main : better name for variable n_print (#4874)  (Georgi Gerganov)
2024-01-11  main : disable token count by default (#4874)  (Georgi Gerganov)
2024-01-11  main : print total token count and tokens consumed so far (#4874)  (pudepiedj)
2024-01-08  common : fix the short form of `--grp-attn-w`, not `-gat` (#4825)  (howlger)
2024-01-08  main : add self-extend support (#4815)  (Georgi Gerganov)
2023-12-30  ggml : add ggml_cpu_has_avx_vnni() (#4589)  (automaticcat)
2023-12-21  common : remove incorrect --model-draft default (#4568)  (Jared Van Bortel)
2023-12-13  common : add `--version` option to show build info in CLI (#4433)  (Siwen Yu)
2023-12-07  llama : per-layer KV cache + quantum K cache (#4309)  (Georgi Gerganov)
2023-12-05  llama : allow overriding GGUF metadata when loading model (#4092)  (Kerfuffle)
2023-12-05  sampling : custom samplers order (#4285)  (MaggotHATE)
2023-11-23  llama : KV cache view API + better KV cache management (#4170)  (Georgi Gerganov)
2023-11-20  main : Add ChatML functionality to main example (#4046)  (Seb C)
2023-11-19  common : comma should be semicolon (#4137)  (kchro3)
2023-11-17  common : improve yaml log escaping (#4080)  (Jannis Schönleber)
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  (Kerfuffle)
2023-11-05  ggml-cuda : fix f16 mul mat (#3961)  (slaren)
2023-11-05  Allow common process_escapes to handle \x sequences (#3928)  (Kerfuffle)
2023-11-03  speculative : change default p_accept to 0.5 + CLI args (#3919)  (Georgi Gerganov)
2023-11-02  build : link against build info instead of compiling against it (#3879)  (cebtenzzre)
2023-11-01  llama : implement YaRN RoPE scaling (#2268)  (cebtenzzre)
2023-11-01  common : minor (#3715)  (Georgi Gerganov)
2023-11-01  common : allow caller to handle help/argument exceptions (#3715)  (bandoti)
2023-10-31  samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841)  (kalomaze)
2023-10-29  Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)  (Kerfuffle)
2023-10-28  llama : add option for greedy sampling with probs (#3813)  (Georgi Gerganov)
2023-10-28  common : print that one line of the syntax help *also* to standard output (#3...  (Henk Poley)
2023-10-23  llama : remove token functions with `context` args in favor of `model` (#3720)  (Marcus Dunn)
2023-10-22  main : escape prompt for cfg_negative_prompt and consecutive inputs in main w...  (vvhg1)
2023-10-20  sampling : refactor init to use llama_sampling_params (#3696)  (Georgi Gerganov)
2023-10-18  speculative : add tree-based sampling example (#3624)  (Georgi Gerganov)
2023-10-17  tokenizer : special token handling (#3538)  (staviq)
2023-10-12  examples: support LLaVA v1.5 (multimodal model) (#3436)  (M. Yusuf Sarıgöz)
2023-10-11  common : fix mirostat state when using multiple sequences (#3543)  (Kerfuffle)
2023-10-07  Fix trying to strip newline from empty prompt and cfg prompt file content (#3...  (Kerfuffle)
2023-10-06  parallel : add option to load external prompt file (#3416)  (pudepiedj)