ik_llama.cpp.git (branch: main, path: /examples)

Commit log, newest first. Each entry: date  commit message (PR)  [author].
2024-05-21  examples: cache hf model when --model not provided (#7353)  [Amir]
2024-05-21  Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)  [jaime-m-p]
2024-05-20  Tokenizer SPM fixes for phi-3 and llama-spm (#7375)  [jaime-m-p]
2024-05-20  perplexity: update README FP16 results [no ci] (#7413)  [Johannes Gäßler]
2024-05-20  server : fix temperature + disable some tests (#7409)  [Georgi Gerganov]
2024-05-20  server : tuning tests (#7388)  [Georgi Gerganov]
2024-05-20  server : return error on too large embedding input (#7389)  [Georgi Gerganov]
2024-05-20  tests : fix --keep_split -> --keep-split (#7374)  [Georgi Gerganov]
2024-05-19  quantize : fix --keep-split check (#7374)  [Fred Douglas]
2024-05-19  server: add test for token probs (#7347)  [Johannes Gäßler]
2024-05-19  server: fix seed being reported back (#7382)  [Johannes Gäßler]
2024-05-19  cmake : update android comments (#7341)  [Georgi Gerganov]
2024-05-18  android : use "ci-android" branch for CI (#7341)  [Georgi Gerganov]
2024-05-18  server: correct --threads documentation [no ci] (#7362)  [Johannes Gäßler]
2024-05-18  perplexity : ndot progress and show stats with < 100 tasks (#7348)  [strawberrymelonpanda]
2024-05-17  rpc : set SO_REUSEADDR for the server socket (#7320)  [Radoslav Gerganov]
2024-05-17  server : add support for the RPC backend (#7305)  [Radoslav Gerganov]
2024-05-17  [Server] Added --verbose option to README [no ci] (#7335)  [Leon Knauer]
2024-05-16  Revert "server bench: fix bench not waiting for model load (#7284)" (#7334)  [Pierrick Hymbert]
2024-05-16  rpc : get available mem for the CPU backend  [Radoslav Gerganov]
2024-05-16  rpc : add command line arg for specifying backend memory  [Radoslav Gerganov]
2024-05-16  doc: add references to hugging face GGUF-my-repo quantisation web tool. (#7288)  [Vaibhav Srivastav]
2024-05-15  ggml : tag ggml_tensor::backend as deprecated (#7290)  [slaren]
2024-05-15  embedding : free the batch after execution (#7297)  [dm4]
2024-05-15  server bench: fix bench not waiting for model load (#7284)  [Johannes Gäßler]
2024-05-14  server: free sampling contexts on exit (#7264)  [Steve Grubb]
2024-05-14  Revert "move ndk code to a new library (#6951)" (#7282)  [Brian]
2024-05-14  ggml : add RPC backend (#6829)  [Radoslav Gerganov]
2024-05-14  move ndk code to a new library (#6951)  [Elton Kola]
2024-05-14  docs: Fix typo and update description for --embeddings flag (#7026)  [Ryuei]
2024-05-14  llava-cli: fix base64 prompt (#7248)  [k.h.lai]
2024-05-13  perplexity: add BF16 vs. FP16 results (#7150)  [Johannes Gäßler]
2024-05-13  change default temperature of OAI compat API from 0 to 1 (#7226)  [Benjamin Findley]
2024-05-11  fix system prompt handling (#7153)  [Xuan Son Nguyen]
2024-05-11  server : free llama_batch on exit (#7212)  [Steve Grubb]
2024-05-11  server: fix reported top tokens for temperature 0 (#7203)  [Johannes Gäßler]
2024-05-11  llama : add Jina Embeddings architecture (#6826)  [Joan Fontanals]
2024-05-10  llama-bench : add pp+tg test type (#7199)  [slaren]
2024-05-10  Fix memory bug in grammar parser (#7194)  [Justine Tunney]
2024-05-10  Main+: optionally allow special tokens from user in interactive mode (#7097)  [HanishKVC]
2024-05-10  llava : fix moondream support (#7163)  [Andrei]
2024-05-10  eval-callback : fix conversion to float (#7184)  [slaren]
2024-05-09  TypoFix (#7162)  [Ahmet Zeer]
2024-05-08  convert-hf : save memory with lazy evaluation (#7075)  [compilade]
2024-05-08  JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143)  [Johannes Gäßler]
2024-05-08  Revert "llava : add support for moondream vision language model (#6899)"  [Georgi Gerganov]
2024-05-08  server : add themes + favicon (#6848)  [JohnnyB]
2024-05-08  main : add --conversation / -cnv flag (#7108)  [Dawid Potocki]
2024-05-08  server : add_special option for tokenize endpoint (#7059)  [Johan]
2024-05-08  clean up json_value & server_log (#7142)  [Xuan Son Nguyen]