ik_llama.cpp.git, branch main: commit log for path root/llama.cpp

Age         Commit message  (Author)
2024-05-14  Add left recursion check: quit early instead of going into an infinite loop (...  (Haggai Nuchi)
2024-05-13  llama : less KV padding when FA is off (#7257)  (Georgi Gerganov)
2024-05-13  llama : rename jina tokenizers to v2 (#7249)  (Joan Fontanals)
2024-05-11  llama : lookup word in vocab before doing BPE merges (#7193)  (Haoxiang Fei)
2024-05-11  llama : add Jina Embeddings architecture (#6826)  (Joan Fontanals)
2024-05-11  ggml : full ALiBi support (#7192)  (Georgi Gerganov)
2024-05-10  llama : use n_vocab to differentiate between mistral 7B and llama3 8B (#7200)  (slaren)
2024-05-09  llama3 custom regex split (#6965)  (jaime-m-p)
2024-05-09  CUDA: generalize FP16 fattn vec kernel (#7061)  (Johannes Gäßler)
2024-05-09  llama : update llama_timings.n_p_eval setting (#7160)  (Daniel Bevenius)
2024-05-08  llama : add BPE pre-tokenization for Qwen2 (#7114)  (Ren Xuancheng)
2024-05-08  convert : add BPE pre-tokenization for DBRX (#7132)  (DAN™)
2024-05-08  ggml : introduce bfloat16 support (#6412)  (Justine Tunney)
2024-05-07  Fix OLMo HF to GGUF conversion (#6910)  (nopperl)
2024-05-05  command-r : add BPE pre-tokenization (#7063)  (DAN™)
2024-05-04  tests : add test-tokenizer-0.sh + fix some tokenizers (#7036)  (Georgi Gerganov)
2024-05-02  chore: fix typo in llama.cpp (#7032)  (alwqx)
2024-04-30  ggml : add Flash Attention (#5021)  (Georgi Gerganov)
2024-04-29  llama : fix BPE pre-tokenization (#6920)  (Georgi Gerganov)
2024-04-29  llama : fix typo LAMMAFILE -> LLAMAFILE (#6974)  (Johannes Gäßler)
2024-04-28  gguf : enforce that tensor names are unique (#6905)  (Xuan Son Nguyen)
2024-04-26  Reset schedule earlier to allow overlap with ggml graph computation on device...  (agray3)
2024-04-26  quantize: add imatrix and dataset metadata in GGUF (#6658)  (Pierrick Hymbert)
2024-04-26  add basic tensor data validation function (#6884)  (slaren)
2024-04-25  cmake : restore LLAMA_LLAMAFILE_DEFAULT  (Georgi Gerganov)
2024-04-25  llama : synchronize before get/set session data (#6911)  (slaren)
2024-04-25  llama : check that all the tensor data is in the model file (#6885)  (slaren)
2024-04-25  tests : minor bash stuff (#6902)  (Georgi Gerganov)
2024-04-25  quantize : add '--keep-split' to quantize model into shards (#6688)  (jiez)
2024-04-24  llama : add llama_get_pooling_type function (#6862)  (Douglas Hanley)
2024-04-24  Server: fix seed for multiple slots (#6835)  (Johannes Gäßler)
2024-04-24  llama : add phi 3 chat template (#6857)  (Tristan Druyen)
2024-04-24  llama : add phi3 support (#6852)  (liuwei-git)
2024-04-22  llama : fix typo in <|im_end|> token text (#6745)  (Georgi Gerganov)
2024-04-21  llama : add option to render special/control tokens (#6807)  (Georgi Gerganov)
2024-04-21  llama : add llama-3 chat template (#6751)  (Wouter)
2024-04-21  llama : support Llama 3 HF conversion (#6745)  (Pedro Cuenca)
2024-04-19  Implement the OLMo architecture (#6741)  (nopperl)
2024-04-18  ggml : group all experts in a single ggml_mul_mat_id (#6505)  (slaren)
2024-04-18  Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)  (Ren Xuancheng)
2024-04-18  llama : fix compatibility with old 2 expert models (#6735)  (slaren)
2024-04-16  llama : make general.name optional (#6709)  (Georgi Gerganov)
2024-04-16  llama : add StableLM2 12B (#6635)  (Ashish)
2024-04-16  llama : add qwen2moe (#6074)  (Shijie)
2024-04-16  gguf : add special tokens metadata for FIM/Infill (#6689)  (Daniel Bevenius)
2024-04-15  llama : fix restoring the number of outputs from state files (#6687)  (compilade)
2024-04-14  llama : add missing kv clear in llama_beam_search (#6664)  (David Renshaw)
2024-04-14  Add Command R chat template (#6650)  (Chao Jiang)
2024-04-13  model: support arch `DbrxForCausalLM` (#6515)  (Pierrick Hymbert)
2024-04-12  llama : add gguf_remove_key + remove split meta during quantize (#6591)  (jiez)