path: root/examples
Age        | Commit message                                                               | Author
2024-04-11 | eval-callback: Example how to use eval callback for debugging (#6576)        | Pierrick Hymbert
2024-04-10 | gguf : add option to not check tensor data (#6582)                           | Daniel Bevenius
2024-04-10 | minor layout improvements (#6572)                                            | Ralph Soika
2024-04-09 | BERT tokenizer fixes (#6498)                                                 | Jared Van Bortel
2024-04-09 | server : detect search query to start webchat (#6554)                        | Ed Lee
2024-04-08 | llama : save and restore kv cache for single seq id (#6341)                  | Jan Boon
2024-04-06 | ci: bench: support sse and fix prompt processing time / server: add tokens us... | Pierrick Hymbert
2024-04-05 | bench : make n_batch and n_ubatch configurable in Batched bench (#6500)      | Ting Sun
2024-04-04 | server: allow penalizing repetition of newlines on server webpage (#6431)    | Shakhar Dasgupta
2024-04-04 | ci: bench: add more ftype, fix triggers and bot comment (#6466)              | Pierrick Hymbert
2024-04-04 | examples : add GBNF validator program (#5948)                                | Clint Herron
2024-04-04 | server : remove obsolete --memory-f32 option                                 | Georgi Gerganov
2024-04-04 | server : add option to disable KV offload (#6468)                            | Xiao-Yong Jin
2024-04-03 | A few small fixes to server's README docs (#6428)                            | Fattire
2024-04-03 | server : handle exception on wrong type in request (#6452)                   | JH23X
2024-04-03 | ggml : mul_mat_id use the same tensor for all the experts (#6387)            | slaren
2024-03-29 | split: allow --split-max-size option (#6343)                                 | Xuan Son Nguyen
2024-03-28 | llava : fix MobileVLM (#6364)                                                | Ziang Wu
2024-03-28 | doc: fix outdated default value of batch size (#6336)                        | Ting Sun
2024-03-28 | server : stop gracefully on SIGTERM (#6348)                                  | Eric Zhang
2024-03-28 | doc: fix typo in MobileVLM-README.md (#6181)                                 | Ziang Wu
2024-03-27 | server: continuous performance monitoring and PR comment (#6283)             | Pierrick Hymbert
2024-03-27 | embedding : show full embedding for single prompt (#6342)                    | howlger
2024-03-27 | llama2c : open file as binary (#6332)                                        | Georgi Gerganov
2024-03-27 | server: public: use relative routes for static files (#6325)                 | Eric Zhang
2024-03-26 | llama : greatly reduce output buffer memory usage (#6122)                    | compilade
2024-03-26 | IQ1_M: 1.75 bpw quantization (#6302)                                         | Kawrakow
2024-03-26 | quantize : be able to override metadata by key (#6321)                       | Kawrakow
2024-03-26 | embedding : adjust `n_ubatch` value (#6296)                                  | Minsoo Cheong
2024-03-26 | server : add `n_discard` parameter (#6300)                                   | Jan Boon
2024-03-26 | cuda : rename build flag to LLAMA_CUDA (#6299)                               | slaren
2024-03-25 | Server: clean up OAI params parsing function (#6284)                         | Xuan Son Nguyen
2024-03-25 | [SYCL] fix SYCL backend build on windows is break by LOG() error (#6290)     | Neo Zhang Jianyu
2024-03-25 | examples : add "retrieval" (#6193)                                           | Minsoo Cheong
2024-03-24 | imatrix : fix wname for mul_mat_id ops (#6271)                               | Georgi Gerganov
2024-03-24 | sampling : deduplicated code for probability distribution access (#6240)     | Minsoo Cheong
2024-03-23 | common: llama_load_model_from_url split support (#6192)                      | Pierrick Hymbert
2024-03-23 | server: docs: `--threads` and `--threads`, `--ubatch-size`, `--log-disable` (... | Pierrick Hymbert
2024-03-23 | server: flush stdout after logging in both text and json layout (#6253)      | Pierrick Hymbert
2024-03-23 | lookup: complement data from context with general text statistics (#5479)    | Johannes Gäßler
2024-03-22 | convert-llama2c-to-ggml : enable conversion of GQA models (#6237)            | fraxy-v
2024-03-22 | quantize: options for output and token embedding tensors qtype (#6239)       | Kawrakow
2024-03-22 | llama_model_loader: support multiple split/shard GGUFs (#6187)               | Pierrick Hymbert
2024-03-22 | json-schema-to-grammar : fix order of props + non-str const/enum (#6232)     | Olivier Chafik
2024-03-22 | server : fix n_keep always showing as 0 in response (#6211)                  | Jan Boon
2024-03-22 | server : enable continuous batching by default (#6231)                       | Georgi Gerganov
2024-03-22 | metal : pad n_ctx by 32 (#6177)                                              | Georgi Gerganov
2024-03-21 | server : update readme doc from `slot_id` to `id_slot` (#6213)               | Jan Boon
2024-03-21 | json-schema-to-grammar improvements (+ added to server) (#5978)              | Olivier Chafik
2024-03-21 | Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (#6183)      | Kawrakow