ik_llama.cpp.git (branch: main)
Commit log for path: examples
Age        | Commit message                                                                    | Author
2024-08-07 | Adding IQ2_TN for use with ternary models (#13)                                   | Kawrakow
2024-08-05 | q2_K: allow it to detect ternary nets and quantize accordingly                    | Iwan Kawrakow
2024-08-01 | iq3_k: Basics                                                                     | Iwan Kawrakow
2024-08-01 | iq5_k: Basics                                                                     | Iwan Kawrakow
2024-08-01 | iq2_k: Basics                                                                     | Iwan Kawrakow
2024-07-28 | IQ4_K: SOTA 4-bit quantization (#6)                                               | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3)                                                     | Kawrakow
2024-07-24 | Add copyright notices                                                             | Iwan Kawrakow
2024-06-26 | imatrix: be able to specify the name of the output tensor                         | Iwan Kawrakow
2024-06-24 | Bitnet: tiny bity faster 1.625 bpw variant on Metal                               | Iwan Kawrakow
2024-06-22 | bitnet: add 2 bpw quantization                                                    | Iwan Kawrakow
2024-06-22 | bitnet: CUDA, scalar, AVX2                                                        | Iwan Kawrakow
2024-06-21 | llama : allow pooled embeddings on any model (#7477)                              | Douglas Hanley
2024-06-21 | swiftui : enable stream updating (#7754)                                          | Shuichi Tsutsumi
2024-06-20 | [SYCL] Fix windows build and inference (#8003)                                    | luoyu-intel
2024-06-20 | server : fix smart slot selection (#8020)                                         | sasha0552
2024-06-18 | Only use FIM middle token if it exists (#7648)                                    | Sigbjørn Skjæret
2024-06-17 | Add support for sqrt on CUDA (#7953)                                              | Calvin Laurenson
2024-06-15 | Add `cvector-generator` example (#7514)                                           | Xuan Son Nguyen
2024-06-14 | llama-bench : fix RPC indication (#7936)                                          | Radoslav Gerganov
2024-06-13 | move BLAS to a separate backend (#6210)                                           | slaren
2024-06-13 | `build`: rename main → llama-cli, server → llama-server, llava-cli → ll...        | Olivier Chafik
2024-06-12 | server : restore numeric prompts (#7883)                                          | Georgi Gerganov
2024-06-11 | llama-bench: more compact markdown tables (#7879)                                 | Johannes Gäßler
2024-06-11 | json: refine constraint for whitespace to avoid runaways yet allow pretty pri...  | Olivier Chafik
2024-06-11 | `json`: document schema conversion in GBNF readme, align manual grammar examp...  | Olivier Chafik
2024-06-10 | examples : remove --instruct remnants (#7846)                                     | Georgi Gerganov
2024-06-10 | server : improve "prompt" handling (#7847)                                        | Georgi Gerganov
2024-06-09 | imatrix : handle partial entries (#7833)                                          | Georgi Gerganov
2024-06-09 | server: do not remove whitespace at the start of a completion chunk (#7830)       | mgroeber9110
2024-06-09 | Revert "[SYCL] Update rpc-server.cpp to include SYCL backend (#7682)" (#7808)     | slaren
2024-06-08 | server : smart slot selection using Longest Common Prefix (#7728)                 | sasha0552
2024-06-07 | gguf-split : change binary multi-byte units to decimal (#7803)                    | Christian Zhou-Zheng
2024-06-07 | server: update cache_prompt documentation [no ci] (#7745)                         | Johannes Gäßler
2024-06-07 | server : do not get prompt in infill mode (#7286)                                 | woodx
2024-06-07 | check for nans in imatrix and quantize (#7807)                                    | slaren
2024-06-06 | imatrix : migrate to gpt_params (#7771)                                           | Georgi Gerganov
2024-06-06 | grammars: x{min,max} repetition operator (#6640)                                  | Olivier Chafik
2024-06-05 | ggml : refactor rope norm/neox (#7634)                                            | Georgi Gerganov
2024-06-05 | readme : remove -ins (#7759)                                                      | arch-btw
2024-06-04 | common : refactor cli arg parsing (#7675)                                         | Georgi Gerganov
2024-06-04 | ggml : remove OpenCL (#7735)                                                      | Georgi Gerganov
2024-06-04 | llama : remove beam search (#7736)                                                | Georgi Gerganov
2024-06-04 | llama-bench : allow using a different printer for stderr with -oe (#7722)         | slaren
2024-06-02 | [SYCL] Update rpc-server.cpp to include SYCL backend (#7682)                      | nickp27
2024-06-01 | server : new UI (#7633)                                                           | Yazan Agha-Schrader
2024-06-02 | SimpleChat: Simple histogram/repeatMatching driven garbageTrimming, Settings ...  | HanishKVC
2024-05-31 | server : update js (#7670)                                                        | Georgi Gerganov
2024-05-30 | Move convert.py to examples/convert-legacy-llama.py (#7430)                       | Galunid
2024-05-29 | llama-bench : add support for the RPC backend (#7435)                             | Radoslav Gerganov