ik_llama.cpp.git (branch: main)
Commit log for path: root/tests

Age | Commit message | Author
2024-08-27 | Faster Gemma2 (#27) | Kawrakow
2024-08-12 | Merge mainline - Aug 12 2024 (#17) | Kawrakow
2024-07-27 | Merge mainline llama.cpp (#3) | Kawrakow
2024-06-22 | bitnet: qnfs tests | Iwan Kawrakow
2024-06-21 | JSON Schema to GBNF integration tests (#7790) | Clint Herron
2024-06-18 | tokenizer : BPE fixes (#7530) | jaime-m-p
2024-06-17 | Add support for sqrt on CUDA (#7953) | Calvin Laurenson
2024-06-12 | tests : add non-cont unary tests (#7857) | Georgi Gerganov
2024-06-11 | tests : check the Python version (#7872) | Georgi Gerganov
2024-06-11 | json: refine constraint for whitespace to avoid runaways yet allow pretty pri... | Olivier Chafik
2024-06-11 | `json`: document schema conversion in GBNF readme, align manual grammar examp... | Olivier Chafik
2024-06-06 | Added support for . (any character) token in grammar engine. (#6467) | Clint Herron
2024-06-06 | grammars: x{min,max} repetition operator (#6640) | Olivier Chafik
2024-06-05 | ggml : refactor rope norm/neox (#7634) | Georgi Gerganov
2024-06-04 | Per token attributes (#7685) | jaime-m-p
2024-06-01 | Fix FlashAttention debug test, FP32 assert (#7684) | Johannes Gäßler
2024-06-01 | CUDA: quantized KV support for FA vec (#7527) | Johannes Gäßler
2024-05-31 | ggml : fix loongson compile warnings (#7537) | Georgi Gerganov
2024-05-29 | ggml : fix YARN + add tests + add asserts (#7617) | Georgi Gerganov
2024-05-29 | cuda : non-cont concat support (#7610) | Georgi Gerganov
2024-05-28 | Tokenizer WPM fixes (#7500) | jaime-m-p
2024-05-28 | tests : fix test-tokenizer-0.sh | Georgi Gerganov
2024-05-28 | ggml : generalize GGML_OP_CONCAT (#7563) | Georgi Gerganov
2024-05-23 | Fix phi3 chat template confusion with zephyr (#7449) | Tristan Druyen
2024-05-23 | ggml : remove ggml_flash_attn and ggml_flash_ff (#7463) | Georgi Gerganov
2024-05-22 | cuda : fix rope + add tests (#7452) | Georgi Gerganov
2024-05-21 | llama : add phi3 128K model support (#7225) | liuwei-git
2024-05-21 | tests : test-tokenizer-0.sh print more info (#7402) | Georgi Gerganov
2024-05-21 | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425) | jaime-m-p
2024-05-20 | Tokenizer SPM fixes for phi-3 and llama-spm (#7375) | jaime-m-p
2024-05-18 | ggml : fix quants nans when all the group weights are very close to zero (#7313) | slaren
2024-05-18 | Unicode codepoint flags for custom regexs (#7245) | jaime-m-p
2024-05-15 | ggml : add `ggml_upscale_ext` (ggml/814) | John Balis
2024-05-14 | metal : support FA without mask + add asserts (#7278) | Georgi Gerganov
2024-05-14 | Add left recursion check: quit early instead of going into an infinite loop (... | Haggai Nuchi
2024-05-12 | CUDA: add FP32 FlashAttention vector kernel (#7188) | Johannes Gäßler
2024-05-11 | llama : lookup word in vocab before doing BPE merges (#7193) | Haoxiang Fei
2024-05-11 | ggml : full ALiBi support (#7192) | Georgi Gerganov
2024-05-09 | llama3 custom regex split (#6965) | jaime-m-p
2024-05-09 | CUDA: generalize FP16 fattn vec kernel (#7061) | Johannes Gäßler
2024-05-08 | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | Johannes Gäßler
2024-05-08 | llama : add BPE pre-tokenization for Qwen2 (#7114) | Ren Xuancheng
2024-05-08 | ggml : introduce bfloat16 support (#6412) | Justine Tunney
2024-05-05 | command-r : add BPE pre-tokenization (#7063) | DAN™
2024-05-05 | py : logging and flake8 suppression refactoring (#7081) | Brian
2024-05-04 | tests : add test-tokenizer-0.sh + fix some tokenizers (#7036) | Georgi Gerganov
2024-05-03 | convert.py : add python logging instead of print() (#6511) | Brian
2024-04-30 | ggml : add Flash Attention (#5021) | Georgi Gerganov
2024-04-29 | Extending grammar integration tests (#6644) | Clint Herron
2024-04-29 | llama : fix BPE pre-tokenization (#6920) | Georgi Gerganov