path: root/llama.cpp
Age         Commit message                                                                      [Author]
2024-05-21  Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)                        [jaime-m-p]
2024-05-20  Tokenizer SPM fixes for phi-3 and llama-spm (#7375)                                 [jaime-m-p]
2024-05-21  llama : remove Persimmon (#7408)                                                    [Georgi Gerganov]
2024-05-20  ggml-opencl, llama: using reserve() if count already known (#7272)                  [Herman Semenov]
2024-05-20  Add provisions for windows support for BF16 code including CMake provision fo...   [Srihari-mcw]
2024-05-20  llama : remove MPI backend (#7395)                                                  [slaren]
2024-05-19  Add StableLM2 pre-tokenizer (#7349)                                                 [Anas Ahouzi]
2024-05-19  Capture CUDA logging output (#7298)                                                 [fraxy-v]
2024-05-18  llama : add support for larger Granite Code Models (20B, 34B) (#7324)               [Steffen Röcker]
2024-05-18  Unicode codepoint flags for custom regexs (#7245)                                   [jaime-m-p]
2024-05-17  llama : use n_embd_head_v when reshaping kqv (#7327)                                [fairydreaming]
2024-05-17  tokenization: add warning for double BOS (#7332)                                    [Johannes Gäßler]
2024-05-17  ggml-quants, llama : removed excess checks (#7274)                                  [Herman Semenov]
2024-05-16  grammar, json, llama: replace push on emplace if it possible (#7273)                [Herman Semenov]
2024-05-14  ggml : add RPC backend (#6829)                                                      [Radoslav Gerganov]
2024-05-14  llama : disable pipeline parallelism with nkvo (#7265)                              [slaren]
2024-05-14  Add left recursion check: quit early instead of going into an infinite loop (...   [Haggai Nuchi]
2024-05-13  llama : less KV padding when FA is off (#7257)                                      [Georgi Gerganov]
2024-05-13  llama : rename jina tokenizers to v2 (#7249)                                        [Joan Fontanals]
2024-05-11  llama : lookup word in vocab before doing BPE merges (#7193)                        [Haoxiang Fei]
2024-05-11  llama : add Jina Embeddings architecture (#6826)                                    [Joan Fontanals]
2024-05-11  ggml : full ALiBi support (#7192)                                                   [Georgi Gerganov]
2024-05-10  llama : use n_vocab to differentiate between mistral 7B and llama3 8B (#7200)       [slaren]
2024-05-09  llama3 custom regex split (#6965)                                                   [jaime-m-p]
2024-05-09  CUDA: generalize FP16 fattn vec kernel (#7061)                                      [Johannes Gäßler]
2024-05-09  llama : update llama_timings.n_p_eval setting (#7160)                               [Daniel Bevenius]
2024-05-08  llama : add BPE pre-tokenization for Qwen2 (#7114)                                  [Ren Xuancheng]
2024-05-08  convert : add BPE pre-tokenization for DBRX (#7132)                                 [DAN™]
2024-05-08  ggml : introduce bfloat16 support (#6412)                                           [Justine Tunney]
2024-05-07  Fix OLMo HF to GGUF conversion (#6910)                                              [nopperl]
2024-05-05  command-r : add BPE pre-tokenization (#7063)                                        [DAN™]
2024-05-04  tests : add test-tokenizer-0.sh + fix some tokenizers (#7036)                       [Georgi Gerganov]
2024-05-02  chore: fix typo in llama.cpp (#7032)                                                [alwqx]
2024-04-30  ggml : add Flash Attention (#5021)                                                  [Georgi Gerganov]
2024-04-29  llama : fix BPE pre-tokenization (#6920)                                            [Georgi Gerganov]
2024-04-29  llama : fix typo LAMMAFILE -> LLAMAFILE (#6974)                                     [Johannes Gäßler]
2024-04-28  gguf : enforce that tensor names are unique (#6905)                                 [Xuan Son Nguyen]
2024-04-26  Reset schedule earlier to allow overlap with ggml graph computation on device...    [agray3]
2024-04-26  quantize: add imatrix and dataset metadata in GGUF (#6658)                          [Pierrick Hymbert]
2024-04-26  add basic tensor data validation function (#6884)                                   [slaren]
2024-04-25  cmake : restore LLAMA_LLAMAFILE_DEFAULT                                             [Georgi Gerganov]
2024-04-25  llama : synchronize before get/set session data (#6911)                             [slaren]
2024-04-25  llama : check that all the tensor data is in the model file (#6885)                 [slaren]
2024-04-25  tests : minor bash stuff (#6902)                                                    [Georgi Gerganov]
2024-04-25  quantize : add '--keep-split' to quantize model into shards (#6688)                 [jiez]
2024-04-24  llama : add llama_get_pooling_type function (#6862)                                 [Douglas Hanley]
2024-04-24  Server: fix seed for multiple slots (#6835)                                         [Johannes Gäßler]
2024-04-24  llama : add phi 3 chat template (#6857)                                             [Tristan Druyen]
2024-04-24  llama : add phi3 support (#6852)                                                    [liuwei-git]
2024-04-22  llama : fix typo in <|im_end|> token text (#6745)                                   [Georgi Gerganov]