path: root/tests
Age        | Commit message                                                                   | Author
2024-03-25 | tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (#6303)                  | Kawrakow
2024-03-22 | tests : conditional python & node json schema tests (#6207)                      | Olivier Chafik
2024-03-22 | json-schema-to-grammar : fix order of props + non-str const/enum (#6232)         | Olivier Chafik
2024-03-22 | metal : pad n_ctx by 32 (#6177)                                                  | Georgi Gerganov
2024-03-21 | tests : disable system() calls (#6198)                                           | Georgi Gerganov
2024-03-21 | json-schema-to-grammar improvements (+ added to server) (#5978)                  | Olivier Chafik
2024-03-15 | llama : add Orion chat template (#6066)                                          | Xuan Son Nguyen
2024-03-13 | test-backend-ops : skip CPU backend by default (#6028)                           | slaren
2024-03-11 | llama : refactor unicode stuff (#5992)                                           | Georgi Gerganov
2024-03-09 | ggml : remove old quantization functions (#5942)                                 | Georgi Gerganov
2024-03-09 | tests : gitignore ggml-common.h                                                  | Georgi Gerganov
2024-03-04 | add some new ops, fix some operators and add batch operations to certain oper... | leejet
2024-02-27 | IQ4_XS: a 4.25 bpw quantization (#5747)                                          | Kawrakow
2024-02-26 | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range... | Kawrakow
2024-02-25 | code : normalize enum names (#5697)                                              | Georgi Gerganov
2024-02-24 | IQ3_S: a much better alternative to Q3_K (#5676)                                 | Kawrakow
2024-02-22 | Add Gemma chat template (#5665)                                                  | Xuan Son Nguyen
2024-02-22 | server : fallback to chatml, add AlphaMonarch chat template (#5628)              | Xuan Son Nguyen
2024-02-21 | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)                        | Kawrakow
2024-02-19 | llama : add llama_chat_apply_template() (#5538)                                  | Xuan Son Nguyen
2024-02-18 | ggml, common, examples, tests : fixed type arguments in printf (#5528)           | Herman Semenov
2024-02-18 | 1.5 bit quantization (#5453)                                                     | Kawrakow
2024-02-17 | ggml : add ALiBi support for ggml_soft_max_ext (#5488)                           | Georgi Gerganov
2024-02-16 | ggml : add numa options (#5377)                                                  | bmwl
2024-02-13 | tests : multi-thread the tokenizer tests (#5474)                                 | Georgi Gerganov
2024-02-13 | tests : disable moe test (#5473)                                                 | Georgi Gerganov
2024-02-11 | ggml : add mmla kernels for quantized GEMM (#4966)                               | snadampal
2024-02-08 | sampling: fix top_k <= 0 (#5388)                                                 | Johannes Gäßler
2024-02-08 | tests : .gitignore obj files                                                     | Georgi Gerganov
2024-02-03 | refactor : switch to emplace_back to avoid extra object (#5291)                  | Michael Klimenko
2024-01-31 | llava : add MobileVLM support (#5132)                                            | JidongZhang-THU
2024-01-30 | `ggml_cuda_cpy` support for 4d tensors and float16->float32 upcasting (ggml/686) | John Balis
2024-01-30 | SOTA 3-bit quants (#5196)                                                        | Kawrakow
2024-01-29 | Nomic Vulkan backend (#4456)                                                     | Jared Van Bortel
2024-01-28 | ggml : add unified SYCL backend for Intel GPUs (#2690)                           | Abhilash Majumder
2024-01-28 | Tests for min_p, sampling queue (#5147)                                          | Johannes Gäßler
2024-01-27 | Remove unused data and add fixes (#5154)                                         | Michael Klimenko
2024-01-26 | tests : gitignore test-c.o                                                       | Georgi Gerganov
2024-01-26 | ci : add model tests + script wrapper (#4586)                                    | crasm
2024-01-17 | ggml : add IQ2 to test-backend-ops + refactoring (#4990)                         | Georgi Gerganov
2024-01-17 | metal : create autorelease pool during library build (#4970)                     | Georgi Gerganov
2024-01-14 | 2-bit quantizations (#4897)                                                      | Kawrakow
2024-01-12 | llama : ggml-backend integration (#4766)                                         | slaren
2024-01-11 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856)                                    | Kawrakow
2024-01-09 | CUDA: faster softmax via shared memory + fp16 math (#4742)                       | Johannes Gäßler
2024-01-08 | SOTA 2-bit quants (#4773)                                                        | Kawrakow
2024-01-04 | Print backend name on test-backend-ops failure (#4751)                           | Johannes Gäßler
2024-01-03 | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639)                 | Guillaume Wenzek
2024-01-02 | metal : enable shader debugging (cmake option) (#4705)                           | Georgi Gerganov
2023-12-29 | cmake : fix ld warning duplicate libraries libllama.a (#4671)                    | Cuong Trinh Manh