path: root/llama.cpp
Date | Commit message | Author
2024-03-02 | llama : refactor internal quantization functions (#5830) | Xuan Son Nguyen
2024-03-02 | llama : fix segfault from unknown model arch name (#5820) | compilade
2024-03-02 | Support multiple GPUs (split mode) on SYCL backend (#5806) | Neo Zhang Jianyu
2024-03-01 | llama : add StarCoder2 support (#5795) | Sourab Mangrulkar
2024-03-01 | llama : cleanup unused mmq flags (#5772) | Pierrick Hymbert
2024-03-01 | unicode : switch to multimap based nfd_map (#5799) | Douglas Hanley
2024-02-29 | llama : constified `llama_set_state_data`'s `src` (#5774) | Marcus Dunn
2024-02-28 | llama : remove deprecated API (#5770) | Georgi Gerganov
2024-02-28 | llama : fix non-quantization of expert gating tensors (#5754) | compilade
2024-02-28 | llama : improve BERT tokenization (#5740) | Douglas Hanley
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747) | Kawrakow
2024-02-27 | llama : fix defrag bugs + add parameter (#5735) | Georgi Gerganov
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-26 | [SYCL] Add support for soft_max ALiBi (#5639) | AidanBeltonS
2024-02-26 | llama : fix Gemma rope type (#5691) | Georgi Gerganov
2024-02-25 | llama : refactor k-shift implementation + KV defragmentation (#5691) | Georgi Gerganov
2024-02-25 | code : normalize enum names (#5697) | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676) | Kawrakow
2024-02-22 | mpt : do not duplicate token_embd.weight on disk (#5670) | Jared Van Bortel
2024-02-22 | gemma : use more bits for the token_embd.weight tensor (#5650) | Georgi Gerganov
2024-02-22 | py : add Gemma conversion from HF models (#5647) | Georgi Gerganov
2024-02-22 | Add Gemma chat template (#5665) | Xuan Son Nguyen
2024-02-22 | minor : fix trailing whitespace (#5638) | Georgi Gerganov
2024-02-22 | server : fallback to chatml, add AlphaMonarch chat template (#5628) | Xuan Son Nguyen
2024-02-22 | mpt : add optional bias tensors (#5638) | Dat Quoc Nguyen
2024-02-22 | llama : fix loading models with shared tok_embd and output (#5651) | slaren
2024-02-21 | llama : fix session save/load with quantized KV (#5649) | slaren
2024-02-21 | gemma : allow offloading the output tensor (#5646) | slaren
2024-02-21 | llama : add `gemma` model (#5631) | postmasters
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590) | Kawrakow
2024-02-20 | Server: use llama_chat_apply_template (#5593) | Xuan Son Nguyen
2024-02-19 | minor : fix trailing whitespace (#5538) | Georgi Gerganov
2024-02-19 | llama : add llama_chat_apply_template() (#5538) | Xuan Son Nguyen
2024-02-18 | 1.5 bit quantization (#5453) | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | Georgi Gerganov
2024-02-16 | llama : minor fixed return int value (#5529) | Herman Semenov
2024-02-16 | ggml : add numa options (#5377) | bmwl
2024-02-15 | Use correct type of pooling for embedding models (#5500) | Douglas Hanley
2024-02-13 | llama : add support for Nomic Embed (#5468) | Jared Van Bortel
2024-02-13 | llama : allow raw byte in SPM vocabs; don't crash on nl 404 (#5478) | Aarni Koskela
2024-02-13 | llama : make load error reporting more granular (#5477) | Aarni Koskela
2024-02-13 | tests : multi-thread the tokenizer tests (#5474) | Georgi Gerganov
2024-02-13 | llama : support batched embeddings (#5466) | Douglas Hanley
2024-02-13 | bert : add tests + fix quantization (#5475) | Georgi Gerganov
2024-02-12 | llama : fix quantization when tensors are missing (#5423) | Georgi Gerganov
2024-02-12 | sync : ggml (#5452) | Georgi Gerganov
2024-02-11 | Add support for BERT embedding models (#5423) | Douglas Hanley
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966) | snadampal
2024-02-09 | llama : do not cap thread count when MoE on CPU (#5419) | Paul Tsochantaris
2024-02-08 | llama : do not print "offloading layers" message in CPU-only builds (#5416) | slaren