Commit log of ik_llama.cpp.git (branch: main, path: llama.cpp)

Date | Commit message | Author
2024-02-18 | 1.5 bit quantization (#5453) | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488) | Georgi Gerganov
2024-02-16 | llama : minor fixed return int value (#5529) | Herman Semenov
2024-02-16 | ggml : add numa options (#5377) | bmwl
2024-02-15 | Use correct type of pooling for embedding models (#5500) | Douglas Hanley
2024-02-13 | llama : add support for Nomic Embed (#5468) | Jared Van Bortel
2024-02-13 | llama : allow raw byte in SPM vocabs; don't crash on nl 404 (#5478) | Aarni Koskela
2024-02-13 | llama : make load error reporting more granular (#5477) | Aarni Koskela
2024-02-13 | tests : multi-thread the tokenizer tests (#5474) | Georgi Gerganov
2024-02-13 | llama : support batched embeddings (#5466) | Douglas Hanley
2024-02-13 | bert : add tests + fix quantization (#5475) | Georgi Gerganov
2024-02-12 | llama : fix quantization when tensors are missing (#5423) | Georgi Gerganov
2024-02-12 | sync : ggml (#5452) | Georgi Gerganov
2024-02-11 | Add support for BERT embedding models (#5423) | Douglas Hanley
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966) | snadampal
2024-02-09 | llama : do not cap thread count when MoE on CPU (#5419) | Paul Tsochantaris
2024-02-08 | llama : do not print "offloading layers" message in CPU-only builds (#5416) | slaren
2024-02-08 | fix trailing whitespace (#5407) | Johannes Gäßler
2024-02-08 | llama : fix MiniCPM (#5392) | runfuture
2024-02-08 | sampling: fix top_k <= 0 (#5388) | Johannes Gäßler
2024-02-07 | Basic Vulkan Multi-GPU implementation (#5321) | 0cc4m
2024-02-07 | llama : add MiniCPM support (#5346) | runfuture
2024-02-05 | iq3_xxs: quards for the no-imatrix situation (#5334) | Kawrakow
2024-02-03 | YaRN : store rope scaling type as int32_t in memory (#5285) | Jared Van Bortel
2024-02-02 | llama : fix memory leak in llama_batch_free (#5252) | Ian Bull
2024-02-01 | llama : support InternLM2 (#5184) | Guoteng
2024-01-31 | llama : reorder build_orion() at correct place (#5118) | Georgi Gerganov
2024-01-31 | llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240) | Georgi Gerganov
2024-01-30 | Fix typos of IQ2_XXS and IQ3_XXS in llama.cpp (#5231) | Yiming Cui
2024-01-30 | kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) | Jared Van Bortel
2024-01-30 | SOTA 3-bit quants (#5196) | Kawrakow
2024-01-29 | kompute : fix fallback to CPU (#5201) | Jared Van Bortel
2024-01-29 | Nomic Vulkan backend (#4456) | Jared Van Bortel
2024-01-29 | fix typo "RLIMIT_MLOCK" (#5175) | divinity76
2024-01-28 | ggml : add Vulkan backend (#2059) | 0cc4m
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-28 | Apply min_p to unsorted tokens (#5115) | Johannes Gäßler
2024-01-28 | Tests for min_p, sampling queue (#5147) | Johannes Gäßler
2024-01-28 | llama : add support for Orion-14B (#5118) | sharpHL
2024-01-26 | Another bucket sort (#5109) | Kawrakow
2024-01-25 | llama : dynamic temperature sampling (#4972) | l3utterfly
2024-01-25 | Fix Q3_K_XS for MoE models (#5113) | Kawrakow
2024-01-24 | llama : pre-allocate input tensors in a separate buffer (#5100) | slaren
2024-01-23 | minor : clean-up some warnings and style (#5094) | Georgi Gerganov
2024-01-22 | llama : fix not enough space in buffer with Qwen (#5086) | slaren
2024-01-22 | llama : support StableLM 2 1.6B (#5052) | compilade
2024-01-22 | llama : add Q3_K_XS (#5060) | Kawrakow
2024-01-22 | llama : add more qwen2 models (#5071) | Shijie
2024-01-20 | llama : run all KQV ops on the CPU with no KV offload (#5049) | slaren
2024-01-19 | llama : support upcoming Qwen2 (#5037) | Shijie
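
For reference, an equivalent three-column listing can be produced from a local clone of the repository; this is a minimal sketch, and the branch name and path filter are taken from the header of this page rather than verified against the clone:

    # List date, subject, and author for commits on main touching llama.cpp
    git log main --date=short --pretty=format:'%ad | %s | %an' -- llama.cpp

Here --date=short yields YYYY-MM-DD dates, %ad/%s/%an are the author date, subject line, and author name, and the trailing path argument restricts the log to commits that modified llama.cpp.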