ik_llama.cpp.git, branch main
Commit log for common/common.h
Age        | Commit message                                                               | Author
-----------|------------------------------------------------------------------------------|-----------------
2024-05-27 | main: replace --no-special with --special (#7534)                            | Brian
2024-05-25 | main : don't print special tokens with --grammar (#6923)                     | Justine Tunney
2024-05-22 | common : normalize naming style (#7462)                                      | Georgi Gerganov
2024-05-21 | examples: cache hf model when --model not provided (#7353)                   | Amir
2024-05-14 | ggml : add RPC backend (#6829)                                               | Radoslav Gerganov
2024-05-10 | Main+: optionally allow special tokens from user in interactive mode (#7097) | HanishKVC
2024-05-08 | main : add --conversation / -cnv flag (#7108)                                | Dawid Potocki
2024-04-30 | perplexity: more statistics, added documentation (#6936)                     | Johannes Gäßler
2024-04-30 | ggml : add Flash Attention (#5021)                                           | Georgi Gerganov
2024-04-30 | Improve usability of --model-url & related flags (#6930)                     | Olivier Chafik
2024-04-29 | llava-cli : multiple images (#6969)                                          | cpumaxx
2024-04-29 | llama : fix BPE pre-tokenization (#6920)                                     | Georgi Gerganov
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658)                   | Pierrick Hymbert
2024-04-26 | add basic tensor data validation function (#6884)                            | slaren
2024-04-24 | llama : add llama_get_pooling_type function (#6862)                          | Douglas Hanley
2024-04-24 | common : revert showing control tokens by default for server (#6860)         | Kyle Mistele
2024-04-16 | ggml : add llamafile sgemm (#6414)                                           | Justine Tunney
2024-04-11 | eval-callback: Example how to use eval callback for debugging (#6576)        | Pierrick Hymbert
2024-04-09 | BERT tokenizer fixes (#6498)                                                 | Jared Van Bortel
2024-04-08 | llama : save and restore kv cache for single seq id (#6341)                  | Jan Boon
2024-03-25 | examples : add "retrieval" (#6193)                                           | Minsoo Cheong
2024-03-23 | common: llama_load_model_from_url split support (#6192)                      | Pierrick Hymbert
2024-03-23 | lookup: complement data from context with general text statistics (#5479)    | Johannes Gäßler
2024-03-22 | common : add HF arg helpers (#6234)                                          | Georgi Gerganov
2024-03-22 | server : enable continuous batching by default (#6231)                       | Georgi Gerganov
2024-03-17 | common: llama_load_model_from_url using --model-url (#6098)                  | Pierrick Hymbert
2024-03-15 | llama : add support for control vectors (#5970)                              | Theia Vogel
2024-03-14 | embedding : print cosine similarity (#899)                                   | Georgi Gerganov
2024-03-13 | llama : add pipeline parallelism support (#6017)                             | slaren
2024-03-09 | server : normalize embeddings (#5956)                                        | SeungWon Jeong
2024-03-04 | speculative : implement stochastic speculative sampling (#5625)              | Minsoo Cheong
2024-03-04 | common : use LLAMA_DEFAULT_SEED (#5855)                                      | DAN™
2024-03-03 | llama : allow for user specified embedding pooling type (#5849)              | Douglas Hanley
2024-03-01 | llama : cleanup unused mmq flags (#5772)                                     | Pierrick Hymbert
2024-02-27 | llama : fix defrag bugs + add parameter (#5735)                              | Georgi Gerganov
2024-02-25 | code : normalize enum names (#5697)                                          | Georgi Gerganov
2024-02-16 | server : add "samplers" param to control the samplers order (#5494)          | Alexey Parfenov
2024-02-16 | ggml : add numa options (#5377)                                              | bmwl
2024-02-11 | common : use enums for sampler types (#5418)                                 | Alexey Parfenov
2024-02-03 | YaRN : store rope scaling type as int32_t in memory (#5285)                  | Jared Van Bortel
2024-01-31 | llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)      | Georgi Gerganov
2024-01-22 | KL-divergence (#5076)                                                        | Kawrakow
2024-01-21 | Add ability to evauate multiple choice tasks (#5047)                         | Kawrakow
2024-01-18 | Add Winogrande evaluation (#5015)                                            | Kawrakow
2024-01-16 | speculative : threading options (#4959)                                      | stduhpf
2024-01-13 | main : add parameter --no-display-prompt (#4541)                             | Yann Follet
2024-01-12 | llama : ggml-backend integration (#4766)                                     | slaren
2024-01-11 | main : better name for variable n_print (#4874)                              | Georgi Gerganov
2024-01-11 | main : disable token count by default (#4874)                                | Georgi Gerganov
2024-01-11 | main : print total token count and tokens consumed so far (#4874)            | pudepiedj