ik_llama.cpp.git / main
Commit log for path: examples
2024-03-14  embedding : print all resulting embeddings (#899)  (Georgi Gerganov)
2024-03-14  embedding : print cosine similarity (#899)  (Georgi Gerganov)
2024-03-13  llama : add pipeline parallelism support (#6017)  (slaren)
2024-03-13  Server: Use multi-task for embeddings endpoint (#6001)  (Xuan Son Nguyen)
2024-03-11  llama : more consistent names of count variables (#5994)  (Georgi Gerganov)
2024-03-11  Update server docker image URLs (#5997)  (Jakub N)
2024-03-11  Server: format error to json (#5961)  (Xuan Son Nguyen)
2024-03-11  server : maintain chat completion id for streaming responses (#5988)  (Minsoo Cheong)
2024-03-10  android : fix utf8 decoding error (#5935)  (Dean)
2024-03-10  server: ci: windows build and tests (#5968)  (Pierrick Hymbert)
2024-03-10  llama : add support for GritLM (#5959)  (DAN™)
2024-03-09  server: benchmark: chat/completions scenario and other llm servers comparison...  (Pierrick Hymbert)
2024-03-09  server : print chat template info  (Georgi Gerganov)
2024-03-09  perplexity : support using multiple sequences to allow larger batch sizes (#5...  (slaren)
2024-03-09  server : fix metrics init (#5964)  (Georgi Gerganov)
2024-03-09  ggml : remove old quantization functions (#5942)  (Georgi Gerganov)
2024-03-09  server : clarify some items in the readme (#5957)  (Georgi Gerganov)
2024-03-09  server : normalize embeddings (#5956)  (SeungWon Jeong)
2024-03-09  server : fix passing prompt as tokens (#5955)  (Alexey Parfenov)
2024-03-09  server : simplify logic for empty prompts (#5953)  (Georgi Gerganov)
2024-03-09  Server: reorganize some http logic (#5939)  (Xuan Son Nguyen)
2024-03-09  server : add SSL support (#5926)  (Gabe Goodhart)
2024-03-09  server: tests: add truncated prompt tests, better kv cache size (#5933)  (Pierrick Hymbert)
2024-03-08  llama : support Mamba Selective State Space Models (#5328)  (compilade)
2024-03-08  server: metrics: add llamacpp:prompt_seconds_total and llamacpp:tokens_predic...  (Pierrick Hymbert)
2024-03-08  server : fix EOS token detection with disabled cache (#5938)  (Georgi Gerganov)
2024-03-07  llama-bench : add embeddings option (#5924)  (Georgi Gerganov)
2024-03-07  server : add `/v1/completions` endpoint (#5914)  (Minsoo Cheong)
2024-03-07  server : refactor (#5882)  (Georgi Gerganov)
2024-03-04  fix speculative decoding build on windows (#5874)  (Jeffrey Quesnelle)
2024-03-04  llama : fix embeddings (#5796)  (Georgi Gerganov)
2024-03-04  speculative : implement stochastic speculative sampling (#5625)  (Minsoo Cheong)
2024-03-04  add alias for chat template (#5858)  (Xuan Son Nguyen)
2024-03-04  main : support special tokens as reverse/anti prompt (#5847)  (DAN™)
2024-03-03  server : init http requests thread pool with --parallel if set (#5836)  (Pierrick Hymbert)
2024-03-02  server: tests: passkey challenge / self-extend with context shift demo (#5832)  (Pierrick Hymbert)
2024-03-02  convert : automatically fall back to HfVocab if tokenizer.model doesn't exist...  (Jared Van Bortel)
2024-03-02  Support multiple GPUs (split mode) on SYCL backend (#5806)  (Neo Zhang Jianyu)
2024-03-01  server : remove api_like_OAI.py proxy script (#5808)  (Georgi Gerganov)
2024-03-01  llama : cleanup unused mmq flags (#5772)  (Pierrick Hymbert)
2024-03-01  server: allow to override threads server pool with --threads-http (#5794)  (Pierrick Hymbert)
2024-03-01  server : fix newlines in help (#5785)  (Georgi Gerganov)
2024-02-29  Server: normalize naming (#5779)  (Xuan Son Nguyen)
2024-02-28  server : hit Ctrl+C twice to exit (#5734)  (Xuan Son Nguyen)
2024-02-28  server : add "/chat/completions" alias for "/v1/...` (#5722)  (Jorge A)
2024-02-27  IQ4_XS: a 4.25 bpw quantization (#5747)  (Kawrakow)
2024-02-27  llama : fix defrag bugs + add parameter (#5735)  (Georgi Gerganov)
2024-02-26  fix server hangs on empty prompt (#5733)  (Xuan Son Nguyen)
2024-02-26  Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range...  (Kawrakow)
2024-02-25  server: tests - slow inference causes timeout on the CI (#5715)  (Pierrick Hymbert)