ik_llama.cpp.git, branch main: commit log for common/common.h (newest first). Each line gives the commit date, the commit message, and the author.
2025-02-09  Add optional MLA (#188)  (Kawrakow)
2024-12-17  Be able to repack tensors at run time (#147)  (Kawrakow)
2024-09-02  Do not process prompts containing binary data for escapes (#33)  (Kawrakow)
2024-08-12  Merge mainline - Aug 12 2024 (#17)  (Kawrakow)
2024-07-27  Merge mainline llama.cpp (#3)  (Kawrakow)
2024-06-26  imatrix: be able to specify the name of the output tensor  (Iwan Kawrakow)
2024-06-18  chore: clean useless beam search param (#7985)  (Frank Mai)
2024-06-15  Add `cvector-generator` example (#7514)  (Xuan Son Nguyen)
2024-06-08  url: save -mu downloads to new cache location (#7826)  (Olivier Chafik)
2024-06-08  server : smart slot selection using Longest Common Prefix (#7728)  (sasha0552)
2024-06-06  server : fix --threads-http arg (#7801)  (Georgi Gerganov)
2024-06-06  imatrix : migrate to gpt_params (#7771)  (Georgi Gerganov)
2024-06-04  common : refactor cli arg parsing (#7675)  (Georgi Gerganov)
2024-05-27  main: replace --no-special with --special (#7534)  (Brian)
2024-05-25  main : don't print special tokens with --grammar (#6923)  (Justine Tunney)
2024-05-22  common : normalize naming style (#7462)  (Georgi Gerganov)
2024-05-21  examples: cache hf model when --model not provided (#7353)  (Amir)
2024-05-14  ggml : add RPC backend (#6829)  (Radoslav Gerganov)
2024-05-10  Main+: optionally allow special tokens from user in interactive mode (#7097)  (HanishKVC)
2024-05-08  main : add --conversation / -cnv flag (#7108)  (Dawid Potocki)
2024-04-30  perplexity: more statistics, added documentation (#6936)  (Johannes Gäßler)
2024-04-30  ggml : add Flash Attention (#5021)  (Georgi Gerganov)
2024-04-30  Improve usability of --model-url & related flags (#6930)  (Olivier Chafik)
2024-04-29  llava-cli : multiple images (#6969)  (cpumaxx)
2024-04-29  llama : fix BPE pre-tokenization (#6920)  (Georgi Gerganov)
2024-04-26  quantize: add imatrix and dataset metadata in GGUF (#6658)  (Pierrick Hymbert)
2024-04-26  add basic tensor data validation function (#6884)  (slaren)
2024-04-24  llama : add llama_get_pooling_type function (#6862)  (Douglas Hanley)
2024-04-24  common : revert showing control tokens by default for server (#6860)  (Kyle Mistele)
2024-04-16  ggml : add llamafile sgemm (#6414)  (Justine Tunney)
2024-04-11  eval-callback: Example how to use eval callback for debugging (#6576)  (Pierrick Hymbert)
2024-04-09  BERT tokenizer fixes (#6498)  (Jared Van Bortel)
2024-04-08  llama : save and restore kv cache for single seq id (#6341)  (Jan Boon)
2024-03-25  examples : add "retrieval" (#6193)  (Minsoo Cheong)
2024-03-23  common: llama_load_model_from_url split support (#6192)  (Pierrick Hymbert)
2024-03-23  lookup: complement data from context with general text statistics (#5479)  (Johannes Gäßler)
2024-03-22  common : add HF arg helpers (#6234)  (Georgi Gerganov)
2024-03-22  server : enable continuous batching by default (#6231)  (Georgi Gerganov)
2024-03-17  common: llama_load_model_from_url using --model-url (#6098)  (Pierrick Hymbert)
2024-03-15  llama : add support for control vectors (#5970)  (Theia Vogel)
2024-03-14  embedding : print cosine similarity (#899)  (Georgi Gerganov)
2024-03-13  llama : add pipeline parallelism support (#6017)  (slaren)
2024-03-09  server : normalize embeddings (#5956)  (SeungWon Jeong)
2024-03-04  speculative : implement stochastic speculative sampling (#5625)  (Minsoo Cheong)
2024-03-04  common : use LLAMA_DEFAULT_SEED (#5855)  (DAN™)
2024-03-03  llama : allow for user specified embedding pooling type (#5849)  (Douglas Hanley)
2024-03-01  llama : cleanup unused mmq flags (#5772)  (Pierrick Hymbert)
2024-02-27  llama : fix defrag bugs + add parameter (#5735)  (Georgi Gerganov)
2024-02-25  code : normalize enum names (#5697)  (Georgi Gerganov)
2024-02-16  server : add "samplers" param to control the samplers order (#5494)  (Alexey Parfenov)