path: root/llama.cpp
Age        | Commit message                                                          | Author
2024-06-14 | llama : more checks before assuming FIM tokens (#7644) | Sigbjørn Skjæret
2024-06-14 | convert : add Poro-34B-chat tokenizer support (#7713) | Elaine
2024-06-13 | move BLAS to a separate backend (#6210) | slaren
2024-06-07 | check for nans in imatrix and quantize (#7807) | slaren
2024-06-06 | Added support for . (any character) token in grammar engine. (#6467) | Clint Herron
2024-06-06 | llama : add jina v2 base code (#7596) | Joan Fontanals
2024-06-05 | ggml : refactor rope norm/neox (#7634) | Georgi Gerganov
2024-06-04 | common : refactor cli arg parsing (#7675) | Georgi Gerganov
2024-06-04 | ggml : remove OpenCL (#7735) | Georgi Gerganov
2024-06-04 | llama : remove beam search (#7736) | Georgi Gerganov
2024-06-04 | Per token attributes (#7685) | jaime-m-p
2024-06-03 | llama : offload to RPC in addition to other backends (#7640) | Radoslav Gerganov
2024-06-03 | Vulkan Mixture of Experts (MoE) support (#7628) | 0cc4m
2024-06-03 | llama : MiniCPM support tied embeddings (#7664) | zhangkaihuo
2024-06-03 | llama : avoid double token-to-piece cache (#7654) | Georgi Gerganov
2024-06-01 | CUDA: quantized KV support for FA vec (#7527) | Johannes Gäßler
2024-05-31 | llama : cache llama_token_to_piece (#7587) | Georgi Gerganov
2024-05-29 | ggml : fix YARN + add tests + add asserts (#7617) | Georgi Gerganov
2024-05-28 | Tokenizer WPM fixes (#7500) | jaime-m-p
2024-05-28 | llama : support small Granite models (#7481) | Giuseppe Scrivano
2024-05-28 | Add support for DeepseekV2ForCausalLM (#7519) | fairydreaming
2024-05-28 | llama : handle unknown utf8 bytes (#7588) | Georgi Gerganov
2024-05-26 | llama : add Smaug 70B support (#7402) | Bartowski
2024-05-25 | main : don't print special tokens with --grammar (#6923) | Justine Tunney
2024-05-25 | ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (#7433) | Masaya, Kato
2024-05-24 | Add support for ArcticForCausalLM (#7020) | fairydreaming
2024-05-23 | Fix phi3 chat template confusion with zephyr (#7449) | Tristan Druyen
2024-05-23 | llama : add getters for n_threads/n_threads_batch (#7464) | Daniel Bevenius
2024-05-23 | ci : use Pythia models instead of OpenLlama (#7470) | Georgi Gerganov
2024-05-23 | Add missing inference support for GPTNeoXForCausalLM (Pythia and GPT-NeoX bas... | fairydreaming
2024-05-23 | llama : rename n_ctx -> cache.size, less confusing (#0) | Georgi Gerganov
2024-05-23 | ggml : drop support for QK_K=64 (#7473) | Georgi Gerganov
2024-05-22 | phi3 : duplicate rope factors in each layer (#7447) | slaren
2024-05-22 | llama : add missing model type names (#7445) | Justine Tunney
2024-05-21 | llama : add phi3 128K model support (#7225) | liuwei-git
2024-05-21 | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425) | jaime-m-p
2024-05-20 | Tokenizer SPM fixes for phi-3 and llama-spm (#7375) | jaime-m-p
2024-05-21 | llama : remove Persimmon (#7408) | Georgi Gerganov
2024-05-20 | ggml-opencl, llama: using reserve() if count already known (#7272) | Herman Semenov
2024-05-20 | Add provisions for windows support for BF16 code including CMake provision fo... | Srihari-mcw
2024-05-20 | llama : remove MPI backend (#7395) | slaren
2024-05-19 | Add StableLM2 pre-tokenizer (#7349) | Anas Ahouzi
2024-05-19 | Capture CUDA logging output (#7298) | fraxy-v
2024-05-18 | llama : add support for larger Granite Code Models (20B, 34B) (#7324) | Steffen Röcker
2024-05-18 | Unicode codepoint flags for custom regexs (#7245) | jaime-m-p
2024-05-17 | llama : use n_embd_head_v when reshaping kqv (#7327) | fairydreaming
2024-05-17 | tokenization: add warning for double BOS (#7332) | Johannes Gäßler
2024-05-17 | ggml-quants, llama : removed excess checks (#7274) | Herman Semenov
2024-05-16 | grammar, json, llama: replace push on emplace if it possible (#7273) | Herman Semenov
2024-05-14 | ggml : add RPC backend (#6829) | Radoslav Gerganov