ik_llama.cpp.git (branch: main)
path: root/common/common.cpp
Age        | Commit message | Author
2025-02-10 | Load all MoE experts during warmup and make warmup 1 token (#198) | saood06
2025-02-09 | Add optional MLA (#188) | Kawrakow
2024-12-17 | Be able to repack tensors at run time (#147) | Kawrakow
2024-10-02 | Adding Q6_0 (#77) | Kawrakow
2024-09-05 | Zen4 Flash Attention - bf16 support (#38) | Kawrakow
2024-09-02 | Do not process prompts containing binary data for escapes (#33) | Kawrakow
2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-26 | imatrix: be able to specify the name of the output tensor | Iwan Kawrakow
2024-06-21 | llama : allow pooled embeddings on any model (#7477) | Douglas Hanley
2024-06-20 | common: fix warning (#8036) | Johannes Gäßler
2024-06-15 | Add `cvector-generator` example (#7514) | Xuan Son Nguyen
2024-06-08 | url: save -mu downloads to new cache location (#7826) | Olivier Chafik
2024-06-08 | server : smart slot selection using Longest Common Prefix (#7728) | sasha0552
2024-06-06 | server : fix --threads-http arg (#7801) | Georgi Gerganov
2024-06-06 | imatrix : migrate to gpt_params (#7771) | Georgi Gerganov
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-06-04 | ggml : remove OpenCL (#7735) | Georgi Gerganov
2024-06-03 | Vulkan Mixture of Experts (MoE) support (#7628) | 0cc4m
2024-05-27 | main: replace --no-special with --special (#7534) | Brian
2024-05-25 | main : don't print special tokens with --grammar (#6923) | Justine Tunney
2024-05-25 | ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (#7433) | Masaya, Kato
2024-05-25 | fix missing slash in `fs_get_cache_directory()` (#7503) | Xuan Son Nguyen
2024-05-22 | common : normalize naming style (#7462) | Georgi Gerganov
2024-05-21 | examples: cache hf model when --model not provided (#7353) | Amir
2024-05-17 | ggml-quants, llama : removed excess checks (#7274) | Herman Semenov
2024-05-14 | ggml : add RPC backend (#6829) | Radoslav Gerganov
2024-05-10 | Fix memory bug in grammar parser (#7194) | Justine Tunney
2024-05-10 | Main+: optionally allow special tokens from user in interactive mode (#7097) | HanishKVC
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | Johannes Gäßler
2024-05-08 | main : add --conversation / -cnv flag (#7108) | Dawid Potocki
2024-05-04 | Fix Linux /sys cpu path to guess number of cores (#7064) | viric
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-30 | Improve usability of --model-url & related flags (#6930) | Olivier Chafik
2024-04-29 | llava-cli : multiple images (#6969) | cpumaxx
2024-04-29 | llama : fix BPE pre-tokenization (#6920) | Georgi Gerganov
2024-04-26 | quantize: add imatrix and dataset metadata in GGUF (#6658) | Pierrick Hymbert
2024-04-26 | add basic tensor data validation function (#6884) | slaren
2024-04-24 | common : revert showing control tokens by default for server (#6860) | Kyle Mistele
2024-04-24 | Server: fix seed for multiple slots (#6835) | Johannes Gäßler
2024-04-21 | llama : add option to render special/control tokens (#6807) | Georgi Gerganov
2024-04-20 | common : try to fix Android CI (#6780) | Georgi Gerganov
2024-04-16 | ggml : add llamafile sgemm (#6414) | Justine Tunney
2024-04-15 | `main`: add --json-schema / -j flag (#6659) | Olivier Chafik
2024-04-11 | eval-callback: Example how to use eval callback for debugging (#6576) | Pierrick Hymbert
2024-04-09 | BERT tokenizer fixes (#6498) | Jared Van Bortel
2024-04-08 | llama : save and restore kv cache for single seq id (#6341) | Jan Boon
2024-04-04 | common: remove duplicate check for curl (#6471) | Daniel Bevenius
2024-03-27 | common : change --no-penalize-nl to --penalize-nl (#6334) | Sigbjørn Skjæret
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299) | slaren