ik_llama.cpp.git (branch: main)
log for path: llama.cpp
Date       | Commit message | Author
2024-03-02 | llama : refactor internal quantization functions (#5830) | Xuan Son Nguyen
2024-03-02 | llama : fix segfault from unknown model arch name (#5820) | compilade
2024-03-02 | Support multiple GPUs (split mode) on SYCL backend (#5806) | Neo Zhang Jianyu
2024-03-01 | llama : add StarCoder2 support (#5795) | Sourab Mangrulkar
2024-03-01 | llama : cleanup unused mmq flags (#5772) | Pierrick Hymbert
2024-03-01 | unicode : switch to multimap based nfd_map (#5799) | Douglas Hanley
2024-02-29 | llama : constified `llama_set_state_data`'s `src` (#5774) | Marcus Dunn
2024-02-28 | llama : remove deprecated API (#5770) | Georgi Gerganov
2024-02-28 | llama : fix non-quantization of expert gating tensors (#5754) | compilade
2024-02-28 | llama : improve BERT tokenization (#5740) | Douglas Hanley
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747) | Kawrakow
2024-02-27 | llama : fix defrag bugs + add parameter (#5735) | Georgi Gerganov
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-26 | [SYCL] Add support for soft_max ALiBi (#5639) | AidanBeltonS
2024-02-26 | llama : fix Gemma rope type (#5691) | Georgi Gerganov
2024-02-25 | llama : refactor k-shift implementation + KV defragmentation (#5691) | Georgi Gerganov
2024-02-25 | code : normalize enum names (#5697) | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676) | Kawrakow
2024-02-22 | mpt : do not duplicate token_embd.weight on disk (#5670) | Jared Van Bortel
2024-02-22 | gemma : use more bits for the token_embd.weight tensor (#5650) | Georgi Gerganov
2024-02-22 | py : add Gemma conversion from HF models (#5647) | Georgi Gerganov
2024-02-22 | Add Gemma chat template (#5665) | Xuan Son Nguyen
2024-02-22 | minor : fix trailing whitespace (#5638) | Georgi Gerganov
2024-02-22 | server : fallback to chatml, add AlphaMonarch chat template (#5628) | Xuan Son Nguyen
2024-02-22 | mpt : add optional bias tensors (#5638) | Dat Quoc Nguyen
2024-02-22 | llama : fix loading models with shared tok_embd and output (#5651) | slaren
2024-02-21 | llama : fix session save/load with quantized KV (#5649) | slaren
2024-02-21 | gemma : allow offloading the output tensor (#5646) | slaren
2024-02-21 | llama : add `gemma` model (#5631) | postmasters
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | Kawrakow
2024-02-20 | Server: use llama_chat_apply_template (#5593) | Xuan Son Nguyen
2024-02-19 | minor : fix trailing whitespace (#5538) | Georgi Gerganov
2024-02-19 | llama : add llama_chat_apply_template() (#5538) | Xuan Son Nguyen
2024-02-18 | 1.5 bit quantization (#5453) | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | Georgi Gerganov
2024-02-16 | llama : minor fixed return int value (#5529) | Herman Semenov
2024-02-16 | ggml : add numa options (#5377) | bmwl
2024-02-15 | Use correct type of pooling for embedding models (#5500) | Douglas Hanley
2024-02-13 | llama : add support for Nomic Embed (#5468) | Jared Van Bortel
2024-02-13 | llama : allow raw byte in SPM vocabs; don't crash on nl 404 (#5478) | Aarni Koskela
2024-02-13 | llama : make load error reporting more granular (#5477) | Aarni Koskela
2024-02-13 | tests : multi-thread the tokenizer tests (#5474) | Georgi Gerganov
2024-02-13 | llama : support batched embeddings (#5466) | Douglas Hanley
2024-02-13 | bert : add tests + fix quantization (#5475) | Georgi Gerganov
2024-02-12 | llama : fix quantization when tensors are missing (#5423) | Georgi Gerganov
2024-02-12 | sync : ggml (#5452) | Georgi Gerganov
2024-02-11 | Add support for BERT embedding models (#5423) | Douglas Hanley
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966) | snadampal
2024-02-09 | llama : do not cap thread count when MoE on CPU (#5419) | Paul Tsochantaris
2024-02-08 | llama : do not print "offloading layers" message in CPU-only builds (#5416) | slaren