path: root/examples
Age        | Commit message                                                                     | Author
2024-03-09 | server : clarify some items in the readme (#5957)                                  | Georgi Gerganov
2024-03-09 | server : normalize embeddings (#5956)                                              | SeungWon Jeong
2024-03-09 | server : fix passing prompt as tokens (#5955)                                      | Alexey Parfenov
2024-03-09 | server : simplify logic for empty prompts (#5953)                                  | Georgi Gerganov
2024-03-09 | Server: reorganize some http logic (#5939)                                         | Xuan Son Nguyen
2024-03-09 | server : add SSL support (#5926)                                                   | Gabe Goodhart
2024-03-09 | server: tests: add truncated prompt tests, better kv cache size (#5933)            | Pierrick Hymbert
2024-03-08 | llama : support Mamba Selective State Space Models (#5328)                         | compilade
2024-03-08 | server: metrics: add llamacpp:prompt_seconds_total and llamacpp:tokens_predic...   | Pierrick Hymbert
2024-03-08 | server : fix EOS token detection with disabled cache (#5938)                       | Georgi Gerganov
2024-03-07 | llama-bench : add embeddings option (#5924)                                        | Georgi Gerganov
2024-03-07 | server : add `/v1/completions` endpoint (#5914)                                    | Minsoo Cheong
2024-03-07 | server : refactor (#5882)                                                          | Georgi Gerganov
2024-03-04 | fix speculative decoding build on windows (#5874)                                  | Jeffrey Quesnelle
2024-03-04 | llama : fix embeddings (#5796)                                                     | Georgi Gerganov
2024-03-04 | speculative : implement stochastic speculative sampling (#5625)                    | Minsoo Cheong
2024-03-04 | add alias for chat template (#5858)                                                | Xuan Son Nguyen
2024-03-04 | main : support special tokens as reverse/anti prompt (#5847)                       | DAN™
2024-03-03 | server : init http requests thread pool with --parallel if set (#5836)             | Pierrick Hymbert
2024-03-02 | server: tests: passkey challenge / self-extend with context shift demo (#5832)     | Pierrick Hymbert
2024-03-02 | convert : automatically fall back to HfVocab if tokenizer.model doesn't exist...   | Jared Van Bortel
2024-03-02 | Support multiple GPUs (split mode) on SYCL backend (#5806)                         | Neo Zhang Jianyu
2024-03-01 | server : remove api_like_OAI.py proxy script (#5808)                               | Georgi Gerganov
2024-03-01 | llama : cleanup unused mmq flags (#5772)                                           | Pierrick Hymbert
2024-03-01 | server: allow to override threads server pool with --threads-http (#5794)          | Pierrick Hymbert
2024-03-01 | server : fix newlines in help (#5785)                                              | Georgi Gerganov
2024-02-29 | Server: normalize naming (#5779)                                                   | Xuan Son Nguyen
2024-02-28 | server : hit Ctrl+C twice to exit (#5734)                                          | Xuan Son Nguyen
2024-02-28 | server : add "/chat/completions" alias for "/v1/...` (#5722)                       | Jorge A
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747)                                            | Kawrakow
2024-02-27 | llama : fix defrag bugs + add parameter (#5735)                                    | Georgi Gerganov
2024-02-26 | fix server hangs on empty prompt (#5733)                                           | Xuan Son Nguyen
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range...   | Kawrakow
2024-02-25 | server: tests - slow inference causes timeout on the CI (#5715)                    | Pierrick Hymbert
2024-02-25 | server: docs - refresh and tease a little bit more the http server (#5718)         | Pierrick Hymbert
2024-02-25 | llama : refactor k-shift implementation + KV defragmentation (#5691)               | Georgi Gerganov
2024-02-25 | server : fix crash when system prompt is bigger than batch size (#5714)            | compilade
2024-02-25 | ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compatibility (#5711)              | Radosław Gryta
2024-02-25 | server: logs - unified format and --log-format option (#5700)                      | Pierrick Hymbert
2024-02-25 | server: concurrency fix + monitoring - add /metrics prometheus compatible end...   | Pierrick Hymbert
2024-02-25 | code : normalize enum names (#5697)                                                | Georgi Gerganov
2024-02-24 | server: continue to update other slots on embedding concurrent request (#5699)     | Pierrick Hymbert
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676)                                   | Kawrakow
2024-02-24 | server: init functional tests (#5566)                                              | Pierrick Hymbert
2024-02-23 | server : add KV cache quantization options (#5684)                                 | AlpinDale
2024-02-22 | server : fallback to chatml, add AlphaMonarch chat template (#5628)                | Xuan Son Nguyen
2024-02-22 | server : clarify some params in the docs (#5640)                                   | Alexey Parfenov
2024-02-22 | Add docs for llama_chat_apply_template (#5645)                                     | Xuan Son Nguyen
2024-02-21 | examples : do not assume BOS when shifting context (#5622)                         | Jared Van Bortel
2024-02-21 | server: health: fix race condition on slots data using tasks queue (#5634)         | Pierrick Hymbert