ik_llama.cpp.git: commit log for branch main, path common/
Age | Commit message | Author
2024-12-17 | Be able to repack tensors at run time (#147) | Kawrakow
2024-10-02 | Adding Q6_0 (#77) | Kawrakow
2024-09-05 | Zen4 Flash Attention - bf16 support (#38) | Kawrakow
2024-09-02 | Do not process prompts containing binary data for escapes (#33) | Kawrakow
2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-26 | imatrix: be able to specify the name of the output tensor | Iwan Kawrakow
2024-06-21 | llama : allow pooled embeddings on any model (#7477) | Douglas Hanley
2024-06-20 | common: fix warning (#8036) | Johannes Gäßler
2024-06-18 | chore: clean useless beam search param (#7985) | Frank Mai
2024-06-15 | Add `cvector-generator` example (#7514) | Xuan Son Nguyen
2024-06-11 | json: refine constraint for whitespace to avoid runaways yet allow pretty pri... | Olivier Chafik
2024-06-11 | `json`: document schema conversion in GBNF readme, align manual grammar examp... | Olivier Chafik
2024-06-08 | url: save -mu downloads to new cache location (#7826) | Olivier Chafik
2024-06-08 | server : smart slot selection using Longest Common Prefix (#7728) | sasha0552
2024-06-07 | cmake : fix BUILD_SHARED_LIBS=ON build (#7784) | intelmatt
2024-06-06 | server : fix --threads-http arg (#7801) | Georgi Gerganov
2024-06-06 | imatrix : migrate to gpt_params (#7771) | Georgi Gerganov
2024-06-06 | Added support for . (any character) token in grammar engine. (#6467) | Clint Herron
2024-06-06 | grammars: x{min,max} repetition operator (#6640) | Olivier Chafik
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-06-04 | ggml : remove OpenCL (#7735) | Georgi Gerganov
2024-06-03 | Vulkan Mixture of Experts (MoE) support (#7628) | 0cc4m
2024-05-27 | main: replace --no-special with --special (#7534) | Brian
2024-05-25 | train : change default FA argument (#7528) | Georgi Gerganov
2024-05-25 | main : don't print special tokens with --grammar (#6923) | Justine Tunney
2024-05-25 | ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (#7433) | Masaya, Kato
2024-05-25 | fix missing slash in `fs_get_cache_directory()` (#7503) | Xuan Son Nguyen
2024-05-22 | common : normalize naming style (#7462) | Georgi Gerganov
2024-05-21 | `grammars`: fix resampling logic regression (#7424) | Olivier Chafik
2024-05-21 | examples: cache hf model when --model not provided (#7353) | Amir
2024-05-17 | ggml-quants, llama : removed excess checks (#7274) | Herman Semenov
2024-05-16 | grammar, json, llama: replace push on emplace if it possible (#7273) | Herman Semenov
2024-05-16 | Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#... | Max Krasnyansky
2024-05-14 | ggml : add RPC backend (#6829) | Radoslav Gerganov
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203) | Johannes Gäßler
2024-05-10 | Fix memory bug in grammar parser (#7194) | Justine Tunney
2024-05-10 | Main+: optionally allow special tokens from user in interactive mode (#7097) | HanishKVC
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | Johannes Gäßler
2024-05-08 | main : add --conversation / -cnv flag (#7108) | Dawid Potocki
2024-05-07 | server: fix incorrectly reported token probabilities (#7125) | Johannes Gäßler
2024-05-04 | Fix Linux /sys cpu path to guess number of cores (#7064) | viric
2024-05-01 | Update LOG_IMPL and LOG_TEE_IMPL (#7029) | Andrew Downing
2024-04-30 | perplexity: more statistics, added documentation (#6936) | Johannes Gäßler
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-30 | Improve usability of --model-url & related flags (#6930) | Olivier Chafik
2024-04-29 | llava-cli : multiple images (#6969) | cpumaxx
2024-04-29 | llama : fix BPE pre-tokenization (#6920) | Georgi Gerganov
2024-04-29 | sampling : use std::random_device{}() for default random seed (#6962) | David Renshaw
2024-04-27 | Replace "alternative" boolean operator in conditional compilation directive (... | mgroeber9110