ik_llama.cpp.git · branch: main · commit log for path: llama.cpp
Age        | Commit message | Author
2024-06-14 | llama : more checks before assuming FIM tokens (#7644) | Sigbjørn Skjæret
2024-06-14 | convert : add Poro-34B-chat tokenizer support (#7713) | Elaine
2024-06-13 | move BLAS to a separate backend (#6210) | slaren
2024-06-07 | check for nans in imatrix and quantize (#7807) | slaren
2024-06-06 | Added support for . (any character) token in grammar engine. (#6467) | Clint Herron
2024-06-06 | llama : add jina v2 base code (#7596) | Joan Fontanals
2024-06-05 | ggml : refactor rope norm/neox (#7634) | Georgi Gerganov
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-06-04 | ggml : remove OpenCL (#7735) | Georgi Gerganov
2024-06-04 | llama : remove beam search (#7736) | Georgi Gerganov
2024-06-04 | Per token attributes (#7685) | jaime-m-p
2024-06-03 | llama : offload to RPC in addition to other backends (#7640) | Radoslav Gerganov
2024-06-03 | Vulkan Mixture of Experts (MoE) support (#7628) | 0cc4m
2024-06-03 | llama : MiniCPM support tied embeddings (#7664) | zhangkaihuo
2024-06-03 | llama : avoid double token-to-piece cache (#7654) | Georgi Gerganov
2024-06-01 | CUDA: quantized KV support for FA vec (#7527) | Johannes Gäßler
2024-05-31 | llama : cache llama_token_to_piece (#7587) | Georgi Gerganov
2024-05-29 | ggml : fix YARN + add tests + add asserts (#7617) | Georgi Gerganov
2024-05-28 | Tokenizer WPM fixes (#7500) | jaime-m-p
2024-05-28 | llama : support small Granite models (#7481) | Giuseppe Scrivano
2024-05-28 | Add support for DeepseekV2ForCausalLM (#7519) | fairydreaming
2024-05-28 | llama : handle unknown utf8 bytes (#7588) | Georgi Gerganov
2024-05-26 | llama : add Smaug 70B support (#7402) | Bartowski
2024-05-25 | main : don't print special tokens with --grammar (#6923) | Justine Tunney
2024-05-25 | ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (#7433) | Masaya, Kato
2024-05-24 | Add support for ArcticForCausalLM (#7020) | fairydreaming
2024-05-23 | Fix phi3 chat template confusion with zephyr (#7449) | Tristan Druyen
2024-05-23 | llama : add getters for n_threads/n_threads_batch (#7464) | Daniel Bevenius
2024-05-23 | ci : use Pythia models instead of OpenLlama (#7470) | Georgi Gerganov
2024-05-23 | Add missing inference support for GPTNeoXForCausalLM (Pythia and GPT-NeoX bas... | fairydreaming
2024-05-23 | llama : rename n_ctx -> cache.size, less confusing (#0) | Georgi Gerganov
2024-05-23 | ggml : drop support for QK_K=64 (#7473) | Georgi Gerganov
2024-05-22 | phi3 : duplicate rope factors in each layer (#7447) | slaren
2024-05-22 | llama : add missing model type names (#7445) | Justine Tunney
2024-05-21 | llama : add phi3 128K model support (#7225) | liuwei-git
2024-05-21 | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425) | jaime-m-p
2024-05-20 | Tokenizer SPM fixes for phi-3 and llama-spm (#7375) | jaime-m-p
2024-05-21 | llama : remove Persimmon (#7408) | Georgi Gerganov
2024-05-20 | ggml-opencl, llama: using reserve() if count already known (#7272) | Herman Semenov
2024-05-20 | Add provisions for windows support for BF16 code including CMake provision fo... | Srihari-mcw
2024-05-20 | llama : remove MPI backend (#7395) | slaren
2024-05-19 | Add StableLM2 pre-tokenizer (#7349) | Anas Ahouzi
2024-05-19 | Capture CUDA logging output (#7298) | fraxy-v
2024-05-18 | llama : add support for larger Granite Code Models (20B, 34B) (#7324) | Steffen Röcker
2024-05-18 | Unicode codepoint flags for custom regexs (#7245) | jaime-m-p
2024-05-17 | llama : use n_embd_head_v when reshaping kqv (#7327) | fairydreaming
2024-05-17 | tokenization: add warning for double BOS (#7332) | Johannes Gäßler
2024-05-17 | ggml-quants, llama : removed excess checks (#7274) | Herman Semenov
2024-05-16 | grammar, json, llama: replace push on emplace if it possible (#7273) | Herman Semenov
2024-05-14 | ggml : add RPC backend (#6829) | Radoslav Gerganov