Repository: ik_llama.cpp.git
Branch: main
Path: root/common

Commit log:

Age | Commit message | Author
2024-02-18 | sampling : do not set min_keep to n_probs (#5564) | Georgi Gerganov
2024-02-18 | common : fix ub (#5530) | Georgi Gerganov
2024-02-18 | ggml, common, examples, tests : fixed type arguments in printf (#5528) | Herman Semenov
2024-02-16 | server : add "samplers" param to control the samplers order (#5494) | Alexey Parfenov
2024-02-16 | ggml : add numa options (#5377) | bmwl
2024-02-11 | common : use enums for sampler types (#5418) | Alexey Parfenov
2024-02-11 | common : fix compile warning | Georgi Gerganov
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966) | snadampal
2024-02-08 | sampling: fix top_k <= 0 (#5388) | Johannes Gäßler
2024-02-07 | Basic Vulkan Multi-GPU implementation (#5321) | 0cc4m
2024-02-05 | common : add dynamic temperature parameters to main example cli (#5295) | l3utterfly
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291) | Michael Klimenko
2024-02-03 | YaRN : store rope scaling type as int32_t in memory (#5285) | Jared Van Bortel
2024-01-31 | llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240) | Georgi Gerganov
2024-01-31 | Vulkan Fixes (#5223) | 0cc4m
2024-01-30 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) | Jared Van Bortel
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-27 | Remove unused data and add fixes (#5154) | Michael Klimenko
2024-01-25 | llama : dynamic temperature sampling (#4972) | l3utterfly
2024-01-23 | minor : clean-up some warnings and style (#5094) | Georgi Gerganov
2024-01-22 | KL-divergence (#5076) | Kawrakow
2024-01-21 | Add ability to evaluate multiple choice tasks (#5047) | Kawrakow
2024-01-18 | Add Winogrande evaluation (#5015) | Kawrakow
2024-01-17 | llama : fix copy/paste error in llama_sampling_params comment (#4994) | David Renshaw
2024-01-16 | speculative : threading options (#4959) | stduhpf
2024-01-15 | llama : apply classifier-free guidance to logits directly (#4951) | David Friehs
2024-01-13 | main : add parameter --no-display-prompt (#4541) | Yann Follet
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-12 | common : streamline the formatting of help (#4890) | howlger
2024-01-12 | llama : fix llm_build_k_shift to use correct n_rot (#4889) | Georgi Gerganov
2024-01-11 | main : better name for variable n_print (#4874) | Georgi Gerganov
2024-01-11 | main : disable token count by default (#4874) | Georgi Gerganov
2024-01-11 | main : print total token count and tokens consumed so far (#4874) | pudepiedj
2024-01-08 | common : fix the short form of `--grp-attn-w`, not `-gat` (#4825) | howlger
2024-01-08 | main : add self-extend support (#4815) | Georgi Gerganov
2024-01-03 | train : fix typo in overlapping-samples help msg (#4758) | Daniel Bevenius
2023-12-30 | ggml : add ggml_cpu_has_avx_vnni() (#4589) | automaticcat
2023-12-29 | cmake : fix ld warning duplicate libraries libllama.a (#4671) | Cuong Trinh Manh
2023-12-23 | server : allow to specify custom prompt for penalty calculation (#3727) | Alexey Parfenov
2023-12-23 | grammar : check the full vocab only if necessary (opt) (#4306) | kalomaze
2023-12-22 | lookup : add prompt lookup decoding example (#4484) | LeonEricsson
2023-12-21 | common : remove incorrect --model-draft default (#4568) | Jared Van Bortel
2023-12-14 | ggml : remove n_dims from ggml_tensor (#4469) | slaren
2023-12-13 | common : add `--version` option to show build info in CLI (#4433) | Siwen Yu
2023-12-12 | english : use `typos` to fix comments and logs (#4354) | Richard Kiss
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309) | Georgi Gerganov
2023-12-06 | common : fix compile warning | Georgi Gerganov
2023-12-05 | llama : allow overriding GGUF metadata when loading model (#4092) | Kerfuffle
2023-12-05 | sampling : custom samplers order (#4285) | MaggotHATE
2023-12-04 | grammar-parser : fix typo (#4318) | Ikko Eltociear Ashimine