ik_llama.cpp.git (branch: main) — commit log for path /examples
Age        | Commit message                                                            | Author
2024-04-11 | eval-callback: Example how to use eval callback for debugging (#6576)     | Pierrick Hymbert
2024-04-10 | gguf : add option to not check tensor data (#6582)                        | Daniel Bevenius
2024-04-10 | minor layout improvements (#6572)                                         | Ralph Soika
2024-04-09 | BERT tokenizer fixes (#6498)                                              | Jared Van Bortel
2024-04-09 | server : detect search query to start webchat (#6554)                     | Ed Lee
2024-04-08 | llama : save and restore kv cache for single seq id (#6341)               | Jan Boon
2024-04-06 | ci: bench: support sse and fix prompt processing time / server: add tokens us... | Pierrick Hymbert
2024-04-05 | bench : make n_batch and n_ubatch configurable in Batched bench (#6500)   | Ting Sun
2024-04-04 | server: allow penalizing repetition of newlines on server webpage (#6431) | Shakhar Dasgupta
2024-04-04 | ci: bench: add more ftype, fix triggers and bot comment (#6466)           | Pierrick Hymbert
2024-04-04 | examples : add GBNF validator program (#5948)                             | Clint Herron
2024-04-04 | server : remove obsolete --memory-f32 option                              | Georgi Gerganov
2024-04-04 | server : add option to disable KV offload (#6468)                         | Xiao-Yong Jin
2024-04-03 | A few small fixes to server's README docs (#6428)                         | Fattire
2024-04-03 | server : handle exception on wrong type in request (#6452)                | JH23X
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387)         | slaren
2024-03-29 | split: allow --split-max-size option (#6343)                              | Xuan Son Nguyen
2024-03-28 | llava : fix MobileVLM (#6364)                                             | Ziang Wu
2024-03-28 | doc: fix outdated default value of batch size (#6336)                     | Ting Sun
2024-03-28 | server : stop gracefully on SIGTERM (#6348)                               | Eric Zhang
2024-03-28 | doc: fix typo in MobileVLM-README.md (#6181)                              | Ziang Wu
2024-03-27 | server: continuous performance monitoring and PR comment (#6283)          | Pierrick Hymbert
2024-03-27 | embedding : show full embedding for single prompt (#6342)                 | howlger
2024-03-27 | llama2c : open file as binary (#6332)                                     | Georgi Gerganov
2024-03-27 | server: public: use relative routes for static files (#6325)              | Eric Zhang
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122)                 | compilade
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302)                                      | Kawrakow
2024-03-26 | quantize : be able to override metadata by key (#6321)                    | Kawrakow
2024-03-26 | embedding : adjust `n_ubatch` value (#6296)                               | Minsoo Cheong
2024-03-26 | server : add `n_discard` parameter (#6300)                                | Jan Boon
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299)                            | slaren
2024-03-25 | Server: clean up OAI params parsing function (#6284)                      | Xuan Son Nguyen
2024-03-25 | [SYCL] fix SYCL backend build on windows is break by LOG() error (#6290)  | Neo Zhang Jianyu
2024-03-25 | examples : add "retrieval" (#6193)                                        | Minsoo Cheong
2024-03-24 | imatrix : fix wname for mul_mat_id ops (#6271)                            | Georgi Gerganov
2024-03-24 | sampling : deduplicated code for probability distribution access (#6240)  | Minsoo Cheong
2024-03-23 | common: llama_load_model_from_url split support (#6192)                   | Pierrick Hymbert
2024-03-23 | server: docs: `--threads` and `--threads`, `--ubatch-size`, `--log-disable` (... | Pierrick Hymbert
2024-03-23 | server: flush stdout after logging in both text and json layout (#6253)   | Pierrick Hymbert
2024-03-23 | lookup: complement data from context with general text statistics (#5479) | Johannes Gäßler
2024-03-22 | convert-llama2c-to-ggml : enable conversion of GQA models (#6237)         | fraxy-v
2024-03-22 | quantize: options for output and token embedding tensors qtype (#6239)    | Kawrakow
2024-03-22 | llama_model_loader: support multiple split/shard GGUFs (#6187)            | Pierrick Hymbert
2024-03-22 | json-schema-to-grammar : fix order of props + non-str const/enum (#6232)  | Olivier Chafik
2024-03-22 | server : fix n_keep always showing as 0 in response (#6211)               | Jan Boon
2024-03-22 | server : enable continuous batching by default (#6231)                    | Georgi Gerganov
2024-03-22 | metal : pad n_ctx by 32 (#6177)                                           | Georgi Gerganov
2024-03-21 | server : update readme doc from `slot_id` to `id_slot` (#6213)            | Jan Boon
2024-03-21 | json-schema-to-grammar improvements (+ added to server) (#5978)           | Olivier Chafik
2024-03-21 | Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183)   | Kawrakow