ik_llama.cpp.git (branch: main)
path: root / llama.cpp
Age | Commit message | Author
2023-09-27 | gguf : fix a few general keys (#3341) | Cebtenzzre
2023-09-27 | metal : reusing llama.cpp logging (#3152) | Rickard Hallerbäck
2023-09-21 | CUDA: use only 1 thread if fully offloaded (#2915) | Johannes Gäßler
2023-09-20 | llama : allow gguf RoPE keys to be overridden with defaults (#3240) | Cebtenzzre
2023-09-17 | llama.cpp : show model size and BPW on load (#3223) | slaren
2023-09-16 | Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (... | goerch
2023-09-15 | check C++ code with -Wmissing-declarations (#3184) | Cebtenzzre
2023-09-15 | llama : add support for StarCoder model architectures (#3187) | Meng Zhang
2023-09-15 | metal : relax conditions on fast matrix multiplication kernel (#3168) | Georgi Gerganov
2023-09-14 | llama : make quantize example up to 2.7x faster (#3115) | Cebtenzzre
2023-09-14 | feature : support Baichuan serial models (#3009) | jameswu2014
2023-09-13 | whisper : tokenizer fix + re-enable tokenizer test for LLaMa (#3096) | goerch
2023-09-08 | examples : make n_ctx warning work again (#3066) | Cebtenzzre
2023-09-08 | build : do not use _GNU_SOURCE gratuitously (#2035) | Przemysław Pawełczyk
2023-09-08 | enable CPU HBM (#2603) | Kunshang Ji
2023-09-07 | fix some warnings from gcc and clang-tidy (#3038) | Cebtenzzre
2023-09-07 | ggml : posixify madvise and pagesize (#3037) | Przemysław Pawełczyk
2023-09-05 | llama : update logic for number of threads when using BLAS | Georgi Gerganov
2023-09-05 | speculative : add grammar support (#2991) | Georgi Gerganov
2023-09-04 | build : on Mac OS enable Metal by default (#2901) | Georgi Gerganov
2023-09-03 | llama : fix bpe tokenize from byte (#2889) | opparco
2023-09-03 | examples : fix gpt-neox (#2943) | momonga
2023-09-01 | Allow quantize to only copy tensors, some other improvements (#2931) | Kerfuffle
2023-09-01 | minor : add const qualifiers (#2853) | m3ndax
2023-09-01 | build : fix most gcc and clang warnings (#2861) | Cebtenzzre
2023-08-31 | @vxiiduu's fix for PrefetchVirtualMemory (#2930) | DannyDaemonic
2023-08-30 | CUDA: mul_mat_q=true llama_context_params default (#2912) | Johannes Gäßler
2023-08-29 | 10X faster BPE tokenizer (#2876) | Kawrakow
2023-08-28 | train : mem usage and other improvements (#2439) | xaedes
2023-08-28 | YAML result logging + preset script (#2657) | Johannes Gäßler
2023-08-28 | llama.cpp : fix wrong vsnprintf call in MS compiler (#2856) | grahameth
2023-08-27 | llama : fix MPI threads (close #2827) | Georgi Gerganov
2023-08-27 | llama : speedup tokenization (#2831) | Kawrakow
2023-08-27 | falcon : fix CUDA inference by making K and Q contiguous (#2830) | Georgi Gerganov
2023-08-27 | k_quants tuning for Falcon-7b (#2816) | Kawrakow
2023-08-27 | gguf : add 64-bit support (GGUF v2) (#2821) | Georgi Gerganov
2023-08-27 | llama : more tokenizer fixes (#2810) | Georgi Gerganov
2023-08-27 | ggml : detect SSSE3 (#2825) | Przemysław Pawełczyk
2023-08-26 | llama : use Unicode Escape Sequence to replace encoded characters (#2814) | Tim Miller
2023-08-26 | llama : move #includes out of _GNU_SOURCE conditional (#2817) | Cebtenzzre
2023-08-26 | llama : use std::abs in llama_sample_tail_free (#2800) | Cebtenzzre
2023-08-26 | k-quants : remove unnecessary tensor shape restrictions (#2811) | Georgi Gerganov
2023-08-26 | Better perplexity for 2- and 3-bit quantization for LLaMA-v2-70B (#2807) | Kawrakow
2023-08-26 | Fix spm whitespaces (#2806) | klosax
2023-08-25 | llama : add llama_beam_search() (#2267) | Matt Pulver
2023-08-25 | llama-bench : add model sizes (#2771) | slaren
2023-08-25 | ROCm Port (#1087) | Henri Vasserman
2023-08-25 | cuda : add RoPE kernel for mode == 2 (NeoX) (#2760) | Georgi Gerganov
2023-08-24 | gguf : add rope_freq_base parameter for CodeLlama (#2769) | slaren
2023-08-24 | metal : bug-fix when enable ggml-alloc (#2757) | Shouzheng Liu