path: root/llama.cpp
Age         Commit message  [Author]
2024-02-18  1.5 bit quantization (#5453)  [Kawrakow]
2024-02-17  ggml : add ALiBi support for ggml_soft_max_ext (#5488)  [Georgi Gerganov]
2024-02-16  llama : minor fix for returned int value (#5529)  [Herman Semenov]
2024-02-16  ggml : add numa options (#5377)  [bmwl]
2024-02-15  Use correct type of pooling for embedding models (#5500)  [Douglas Hanley]
2024-02-13  llama : add support for Nomic Embed (#5468)  [Jared Van Bortel]
2024-02-13  llama : allow raw byte in SPM vocabs; don't crash on nl 404 (#5478)  [Aarni Koskela]
2024-02-13  llama : make load error reporting more granular (#5477)  [Aarni Koskela]
2024-02-13  tests : multi-thread the tokenizer tests (#5474)  [Georgi Gerganov]
2024-02-13  llama : support batched embeddings (#5466)  [Douglas Hanley]
2024-02-13  bert : add tests + fix quantization (#5475)  [Georgi Gerganov]
2024-02-12  llama : fix quantization when tensors are missing (#5423)  [Georgi Gerganov]
2024-02-12  sync : ggml (#5452)  [Georgi Gerganov]
2024-02-11  Add support for BERT embedding models (#5423)  [Douglas Hanley]
2024-02-11  ggml : add mmla kernels for quantized GEMM (#4966)  [snadampal]
2024-02-09  llama : do not cap thread count when MoE on CPU (#5419)  [Paul Tsochantaris]
2024-02-08  llama : do not print "offloading layers" message in CPU-only builds (#5416)  [slaren]
2024-02-08  fix trailing whitespace (#5407)  [Johannes Gäßler]
2024-02-08  llama : fix MiniCPM (#5392)  [runfuture]
2024-02-08  sampling: fix top_k <= 0 (#5388)  [Johannes Gäßler]
2024-02-07  Basic Vulkan Multi-GPU implementation (#5321)  [0cc4m]
2024-02-07  llama : add MiniCPM support (#5346)  [runfuture]
2024-02-05  iq3_xxs: guards for the no-imatrix situation (#5334)  [Kawrakow]
2024-02-03  YaRN : store rope scaling type as int32_t in memory (#5285)  [Jared Van Bortel]
2024-02-02  llama : fix memory leak in llama_batch_free (#5252)  [Ian Bull]
2024-02-01  llama : support InternLM2 (#5184)  [Guoteng]
2024-01-31  llama : reorder build_orion() at correct place (#5118)  [Georgi Gerganov]
2024-01-31  llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)  [Georgi Gerganov]
2024-01-30  Fix typos of IQ2_XXS and IQ3_XXS in llama.cpp (#5231)  [Yiming Cui]
2024-01-30  kompute : llama-bench support and ggml_cpu_has_kompute() (#5226)  [Jared Van Bortel]
2024-01-30  SOTA 3-bit quants (#5196)  [Kawrakow]
2024-01-29  kompute : fix fallback to CPU (#5201)  [Jared Van Bortel]
2024-01-29  Nomic Vulkan backend (#4456)  [Jared Van Bortel]
2024-01-29  fix typo "RLIMIT_MLOCK" (#5175)  [divinity76]
2024-01-28  ggml : add Vulkan backend (#2059)  [0cc4m]
2024-01-28  ggml : add unified SYCL backend for Intel GPUs (#2690)  [Abhilash Majumder]
2024-01-28  Apply min_p to unsorted tokens (#5115)  [Johannes Gäßler]
2024-01-28  Tests for min_p, sampling queue (#5147)  [Johannes Gäßler]
2024-01-28  llama : add support for Orion-14B (#5118)  [sharpHL]
2024-01-26  Another bucket sort (#5109)  [Kawrakow]
2024-01-25  llama : dynamic temperature sampling (#4972)  [l3utterfly]
2024-01-25  Fix Q3_K_XS for MoE models (#5113)  [Kawrakow]
2024-01-24  llama : pre-allocate input tensors in a separate buffer (#5100)  [slaren]
2024-01-23  minor : clean-up some warnings and style (#5094)  [Georgi Gerganov]
2024-01-22  llama : fix not enough space in buffer with Qwen (#5086)  [slaren]
2024-01-22  llama : support StableLM 2 1.6B (#5052)  [compilade]
2024-01-22  llama : add Q3_K_XS (#5060)  [Kawrakow]
2024-01-22  llama : add more qwen2 models (#5071)  [Shijie]
2024-01-20  llama : run all KQV ops on the CPU with no KV offload (#5049)  [slaren]
2024-01-19  llama : support upcoming Qwen2 (#5037)  [Shijie]