Repository: ik_llama.cpp.git (branch: main)
Commit log for path: root/common/common.cpp
Age         Commit message  (Author)
2024-04-24  Server: fix seed for multiple slots (#6835)  (Johannes Gäßler)
2024-04-21  llama : add option to render special/control tokens (#6807)  (Georgi Gerganov)
2024-04-20  common : try to fix Android CI (#6780)  (Georgi Gerganov)
2024-04-16  ggml : add llamafile sgemm (#6414)  (Justine Tunney)
2024-04-15  `main`: add --json-schema / -j flag (#6659)  (Olivier Chafik)
2024-04-11  eval-callback: Example how to use eval callback for debugging (#6576)  (Pierrick Hymbert)
2024-04-09  BERT tokenizer fixes (#6498)  (Jared Van Bortel)
2024-04-08  llama : save and restore kv cache for single seq id (#6341)  (Jan Boon)
2024-04-04  common: remove duplicate check for curl (#6471)  (Daniel Bevenius)
2024-03-27  common : change --no-penalize-nl to --penalize-nl (#6334)  (Sigbjørn Skjæret)
2024-03-26  cuda : rename build flag to LLAMA_CUDA (#6299)  (slaren)
2024-03-25  examples : add "retrieval" (#6193)  (Minsoo Cheong)
2024-03-23  common: llama_load_model_from_url split support (#6192)  (Pierrick Hymbert)
2024-03-23  lookup: complement data from context with general text statistics (#5479)  (Johannes Gäßler)
2024-03-22  common : default --hf-file to --model (#6234)  (Georgi Gerganov)
2024-03-22  common : add HF arg helpers (#6234)  (Georgi Gerganov)
2024-03-22  metal : pad n_ctx by 32 (#6177)  (Georgi Gerganov)
2024-03-22  Fix params underscore convert to dash. (#6203)  (DAN™)
2024-03-21  Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183)  (Kawrakow)
2024-03-19  common : print usage on '-h' and '--help' (#6145)  (DAN™)
2024-03-18  common : tidy-up argument parsing (#6105)  (DAN™)
2024-03-17  common: llama_load_model_from_url using --model-url (#6098)  (Pierrick Hymbert)
2024-03-16  common : refactor nested if causing error C1061 on MSVC (#6101)  (DAN™)
2024-03-15  llama : add support for control vectors (#5970)  (Theia Vogel)
2024-03-14  embedding : print cosine similarity (#899)  (Georgi Gerganov)
2024-03-13  llama : add pipeline parallelism support (#6017)  (slaren)
2024-03-11  llama : more consistent names of count variables (#5994)  (Georgi Gerganov)
2024-03-09  server : normalize embeddings (#5956)  (SeungWon Jeong)
2024-03-08  llama : support Mamba Selective State Space Models (#5328)  (compilade)
2024-03-04  llama : fix embeddings (#5796)  (Georgi Gerganov)
2024-03-04  speculative : implement stochastic speculative sampling (#5625)  (Minsoo Cheong)
2024-03-03  llama : allow for user specified embedding pooling type (#5849)  (Douglas Hanley)
2024-03-02  Support multiple GPUs (split mode) on SYCL backend (#5806)  (Neo Zhang Jianyu)
2024-03-01  common : fix flag `--logits-all` to `--all-logits` (#5805)  (Miwa / Ensan)
2024-03-01  llama : cleanup unused mmq flags (#5772)  (Pierrick Hymbert)
2024-02-27  llama : fix defrag bugs + add parameter (#5735)  (Georgi Gerganov)
2024-02-25  code : normalize enum names (#5697)  (Georgi Gerganov)
2024-02-18  common, server : surface min_keep as its own parameter (#5567)  (Robey Holderith)
2024-02-18  common : fix ub (#5530)  (Georgi Gerganov)
2024-02-18  ggml, common, examples, tests : fixed type arguments in printf (#5528)  (Herman Semenov)
2024-02-16  server : add "samplers" param to control the samplers order (#5494)  (Alexey Parfenov)
2024-02-16  ggml : add numa options (#5377)  (bmwl)
2024-02-11  common : use enums for sampler types (#5418)  (Alexey Parfenov)
2024-02-11  ggml : add mmla kernels for quantized GEMM (#4966)  (snadampal)
2024-02-07  Basic Vulkan Multi-GPU implementation (#5321)  (0cc4m)
2024-02-05  common : add dynamic temperature parameters to main example cli (#5295)  (l3utterfly)
2024-02-03  refactor : switch to emplace_back to avoid extra object (#5291)  (Michael Klimenko)
2024-01-31  llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)  (Georgi Gerganov)
2024-01-31  Vulkan Fixes (#5223)  (0cc4m)
2024-01-30  kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)  (Jared Van Bortel)