ik_llama.cpp.git: log of root/tests (branch: main)

Age | Commit message | Author
2024-02-08 | sampling: fix top_k <= 0 (#5388) | Johannes Gäßler
2024-02-08 | tests : .gitignore obj files | Georgi Gerganov
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291) | Michael Klimenko
2024-01-31 | llava : add MobileVLM support (#5132) | JidongZhang-THU
2024-01-30 | `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686) | John Balis
2024-01-30 | SOTA 3-bit quants (#5196) | Kawrakow
2024-01-29 | Nomic Vulkan backend (#4456) | Jared Van Bortel
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690) | Abhilash Majumder
2024-01-28 | Tests for min_p, sampling queue (#5147) | Johannes Gäßler
2024-01-27 | Remove unused data and add fixes (#5154) | Michael Klimenko
2024-01-26 | tests : gitignore test-c.o | Georgi Gerganov
2024-01-26 | ci : add model tests + script wrapper (#4586) | crasm
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990) | Georgi Gerganov
2024-01-17 | metal : create autorelease pool during library build (#4970) | Georgi Gerganov
2024-01-14 | 2-bit quantizations (#4897) | Kawrakow
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-11 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | Kawrakow
2024-01-09 | CUDA: faster softmax via shared memory + fp16 math (#4742) | Johannes Gäßler
2024-01-08 | SOTA 2-bit quants (#4773) | Kawrakow
2024-01-04 | Print backend name on test-backend-ops failure (#4751) | Johannes Gäßler
2024-01-03 | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639) | Guillaume Wenzek
2024-01-02 | metal : enable shader debugging (cmake option) (#4705) | Georgi Gerganov
2023-12-29 | cmake : fix ld warning duplicate libraries libllama.a (#4671) | Cuong Trinh Manh
2023-12-29 | ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669) | bssrdf
2023-12-28 | gpt2 : Add gpt2 architecture integration (#4555) | manikbhandari
2023-12-24 | cuda : improve cuda pool efficiency using virtual memory (#4606) | slaren
2023-12-21 | ggml : change ggml_scale to take a float instead of tensor (#4573) | Georgi Gerganov
2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) | Ebey Abraham
2023-12-14 | ggml : use ggml_row_size where possible (#4472) | slaren
2023-12-13 | sync : ggml (SD ops, tests, kernels) (#4444) | Georgi Gerganov
2023-12-13 | llama : add Mixtral support (#4406) | slaren
2023-12-12 | english : use `typos` to fix comments and logs (#4354) | Richard Kiss
2023-12-07 | sync : ggml (new ops, tests, backend, etc.) (#4359) | Georgi Gerganov
2023-11-20 | ci : add flake8 to github actions (python linting) (#4129) | Galunid
2023-11-17 | py : remove superfluous import statements (#4076) | Jiří Podivín
2023-11-14 | stablelm : StableLM support (#3586) | Galunid
2023-11-13 | sync : ggml (backend v2) (#3912) | Georgi Gerganov
2023-10-30 | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | Georgi Gerganov
2023-10-24 | Add more tokenizer tests (#3742) | Galunid
2023-10-22 | Add test for MPT tokenization (#3728) | goerch
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696) | Georgi Gerganov
2023-10-20 | gguf : support big endian platform (#3552) | Qin Yue Chen
2023-10-10 | Minor improvements in GPT2 tokenizer (#3567) | goerch
2023-10-04 | sync : ggml (conv 1d + 2d updates, UB fixes) (#3468) | Georgi Gerganov
2023-10-03 | Work on the BPE tokenizer (#3252) | goerch
2023-09-28 | build : enable more non-default compiler warnings (#3200) | Cebtenzzre
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301) | slaren
2023-09-28 | train : finetune LORA (#2632) | xaedes
2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | Georgi Gerganov
2023-09-16 | Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (... | goerch