Date | Commit message | Author
2024-05-14 | server: free sampling contexts on exit (#7264) | Steve Grubb
  * server: free sampling contexts on exit
    This cleans up the last leak found by the address sanitizer.
  * fix whitespace
  * fix whitespace
2024-05-14 | Revert "move ndk code to a new library (#6951)" (#7282) | Brian
  This reverts commit efc8f767c8c8c749a245dd96ad4e2f37c164b54c.
2024-05-14 | ggml : add RPC backend (#6829) | Radoslav Gerganov
  * ggml : add RPC backend
    The RPC backend proxies all operations to a remote server which runs a regular backend (CPU, CUDA, Metal, etc).
  * set TCP_NODELAY
  * add CI workflows
  * Address review comments
  * fix warning
  * implement llama_max_devices() for RPC
  * Address review comments
  * Address review comments
  * wrap sockfd into a struct
  * implement get_alignment and get_max_size
  * add get_device_memory
  * fix warning
  * win32 support
  * add README
  * readme : trim trailing whitespace
  * Address review comments
  * win32 fix
  * Address review comments
  * fix compile warnings on macos
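The commit notes mention wrapping the socket descriptor in a struct and enabling TCP_NODELAY on the RPC connection. A minimal, hypothetical sketch of what such a wrapper could look like on POSIX (names and layout are assumptions, not the actual ggml-rpc code, which also supports win32):

```cpp
// Hypothetical RAII socket wrapper for an RPC client connection. This is NOT the
// actual ggml RPC backend code; it only illustrates the idea of wrapping sockfd in
// a struct and setting TCP_NODELAY, as the commit messages describe. POSIX only.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

struct rpc_socket {
    int fd = -1;

    bool connect_to(const char * host, uint16_t port) {
        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return false;

        // disable Nagle's algorithm so small per-operation messages are sent immediately
        int flag = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(port);
        inet_pton(AF_INET, host, &addr.sin_addr);

        return connect(fd, (sockaddr *) &addr, sizeof(addr)) == 0;
    }

    ~rpc_socket() {
        if (fd >= 0) close(fd);
    }
};
```

TCP_NODELAY matters for this kind of proxying because the client issues many small request messages (roughly one per graph operation), and Nagle's algorithm would otherwise batch and delay them.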
2024-05-14 | llama : disable pipeline parallelism with nkvo (#7265) | slaren
2024-05-14 | move ndk code to a new library (#6951) | Elton Kola
2024-05-14 | Add left recursion check: quit early instead of going into an infinite loop (#7083) | Haggai Nuchi
  * Add left recursion check: quit early instead of going into an infinite loop
  * Remove custom enum, rename left recursion check and move to "grammar internal" section, add handling for edge case where a leftmost nonterminal may be empty
  * Remove unnecessary declaration
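A left-recursive rule such as `root ::= root "a" | "b"` would otherwise make the grammar engine loop forever. A simplified, hypothetical sketch of detecting left recursion with a depth-first walk over the rules (the real check lives in llama.cpp's grammar internals, uses different data structures, and also handles the case where a leftmost nonterminal can be empty):

```cpp
// Illustrative left-recursion check for a context-free grammar.
// Symbols with id >= 0 are nonterminals (rule indices); -1 marks any terminal.
#include <cstdio>
#include <vector>

using Alt  = std::vector<int>;   // one alternative: a sequence of symbol ids
using Rule = std::vector<Alt>;   // a rule: one or more alternatives

// can `cur` derive something whose leftmost nonterminal is `target`?
static bool reaches_leftmost(const std::vector<Rule> & rules, int cur, int target,
                             std::vector<bool> & visited) {
    if (visited[cur]) return false;
    visited[cur] = true;
    for (const Alt & alt : rules[cur]) {
        if (alt.empty()) continue;
        const int first = alt[0];
        if (first < 0) continue;            // leftmost symbol is a terminal
        if (first == target) return true;   // direct or indirect left recursion found
        if (reaches_leftmost(rules, first, target, visited)) return true;
    }
    return false;
}

static bool rule_is_left_recursive(const std::vector<Rule> & rules, int r) {
    std::vector<bool> visited(rules.size(), false);
    return reaches_leftmost(rules, r, r, visited);
}

int main() {
    // rule 0: root ::= root "a" | "b"   (directly left-recursive)
    std::vector<Rule> rules = { { { 0, -1 }, { -1 } } };
    if (rule_is_left_recursive(rules, 0)) {
        std::printf("left recursion detected in rule 0 - quitting early instead of looping\n");
    }
    return 0;
}
```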
2024-05-14 | docs: Fix typo and update description for --embeddings flag (#7026) | Ryuei
  - Change '--embedding' to '--embeddings' in the README
  - Update the description to match the latest --help output
  - Add a caution about defining physical batch size
2024-05-13 | convert-hf : support direct Q8_0 conversion (#7234) | compilade
  * convert-hf : support q8_0 conversion
  * convert-hf : add missing ftype
    This was messing with the checksums otherwise.
  * convert-hf : add missing ftype to Baichuan and Xverse
    I didn't notice these on my first pass.
2024-05-13 | llama : less KV padding when FA is off (#7257) | Georgi Gerganov
  ggml-ci
2024-05-14 | llava-cli: fix base64 prompt (#7248) | k.h.lai
2024-05-13 | perplexity: add BF16 vs. FP16 results (#7150) | Johannes Gäßler
2024-05-13 | [SYCL] rm wait() (#7233) | Neo Zhang
2024-05-13 | llama : rename jina tokenizers to v2 (#7249) | Joan Fontanals
  * refactor: rename jina tokenizers to v2
  * refactor: keep refactoring non-breaking
2024-05-13 | convert.py: Outfile default name change and additional metadata support (#4858) | Brian
  * convert.py: Outfile default name change and additional metadata support
  * convert.py: don't stringify Metadata load method output
  * convert.py: typo fix
  * convert.py: fix metadata format to sync with LLM_KV_NAMES in llama.cpp
2024-05-13 | change default temperature of OAI compat API from 0 to 1 (#7226) | Benjamin Findley
  * change default temperature of OAI compat API from 0 to 1
  * make tests explicitly send temperature to OAI API
2024-05-13 | [SYCL] Add oneapi runtime dll files to win release package (#7241) | Neo Zhang
  * add oneapi runtime dlls to release package
  * fix path
  * fix path
  * fix path
  * fix path
  * fix path
  Co-authored-by: Zhang <jianyu.zhang@intel.com>
2024-05-13 | [SYCL] update CI with oneapi 2024.1 (#7235) | Neo Zhang
  Co-authored-by: Zhang <jianyu.zhang@intel.com>
2024-05-12 | CUDA: add FP32 FlashAttention vector kernel (#7188) | Johannes Gäßler
  * CUDA: add FP32 FlashAttention vector kernel
  * fixup! CUDA: add FP32 FlashAttention vector kernel
  * fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
  * fixup! fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
2024-05-12 | cmake : fix version cmp (#7227) | Georgi Gerganov
2024-05-12 | remove convert-lora-to-ggml.py (#7204) | slaren
2024-05-11 | metal : fix warnings (skipme) (#0) | Georgi Gerganov
2024-05-11 | sync : ggml | Georgi Gerganov
2024-05-11 | metal : fix indent (ggml/0) | Georgi Gerganov
2024-05-11 | ggml : resolve merge (ggml/0) | Georgi Gerganov
  ggml-ci
2024-05-12 | Scripting & documenting debugging one test without anything else in the loop. (#7096) | Josh Ramer
  * A little documentation that shares my quick tips for working in the repository.
  * Update startup-testing-debugging.md
  * script that shows a menu of tests to pick from & run the debugger on
  * debug-test.sh: Refactor CLI help message
  * debug-test.sh: documentation update
  * debug-test.sh: CLI Help output corrections
  * debug-test.sh: minor doc fix
  Authored-by: Josh Ramer <ubuntu@ip-172-31-32-53.ec2.internal>
  Assisted-by: brian khuu <mofosyne@gmail.com>
2024-05-11 | fix system prompt handling (#7153) | Xuan Son Nguyen
2024-05-11 | convert-hf : support bfloat16 conversion (#7158) | compilade
  * convert-hf : support bfloat16 conversion
  * gguf-py : flake8 fixes
  * convert-hf : add missing space after comma
  * convert-hf : get bit-exact same output as ./quantize
    The quantization version was missing.
  * convert-hf : don't round bf16 NANs
  * convert-hf : save some memory with np.int16 intermediate bf16 weights
  * convert-hf : more closely match llama.cpp with which weights to keep in f32
  * convert-hf : add --outtype auto-f16
    A reason for this to exist is for model quantizers who want an initial GGUF with the most fidelity to the original model while still using a 16-bit float type instead of 32-bit floats.
  * convert-hf : remove a semicolon because flake8 doesn't like it
    It's a reflex from when programming in C/C++, I guess.
  * convert-hf : support outtype templating in outfile name
  * convert-hf : rename --outtype auto-f16 to --outtype auto
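bfloat16 is just the upper 16 bits of an IEEE fp32 value, so conversion is essentially a right shift; the subtlety called out above is that NaNs must keep a non-zero mantissa, otherwise truncation turns them into infinities. A rough illustrative C++ sketch of that idea (the actual converter is Python/numpy and additionally rounds non-NaN values to nearest, so treat this as the general technique only):

```cpp
// Illustrative fp32 -> bf16 truncation with NaN preservation.
#include <cstdint>
#include <cstring>
#include <cstdio>

static uint16_t fp32_to_bf16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));

    // NaN: exponent all ones and mantissa non-zero. Force a mantissa bit in the
    // result so truncation cannot turn a NaN into +/-inf.
    if ((bits & 0x7fffffff) > 0x7f800000) {
        return (uint16_t)((bits >> 16) | 64);
    }
    // plain truncation: keep sign, exponent and the top 7 mantissa bits
    return (uint16_t)(bits >> 16);
}

int main() {
    std::printf("bf16(1.0f)  = 0x%04x\n", fp32_to_bf16(1.0f));    // 0x3f80
    std::printf("bf16(-2.0f) = 0x%04x\n", fp32_to_bf16(-2.0f));   // 0xc000
    return 0;
}
```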
2024-05-11 | sync : ggml | Georgi Gerganov
  ggml-ci
2024-05-11 | feat: implemented sigmoid function (ggml/806) | Justina Cho
  * added sigmoid function
  * implemented metal kernel for sigmoid
  * implemented cuda kernel for sigmoid
  * added sigmoid unary op and incremented count
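For reference, the new op is the standard element-wise sigmoid, sigma(x) = 1 / (1 + e^(-x)). A trivial CPU reference in C++ illustrating the math (the actual ggml change adds a unary op plus CUDA and Metal kernels, as the bullets above say):

```cpp
// Reference element-wise sigmoid, illustrating the math behind the new ggml op.
#include <cmath>
#include <cstdio>
#include <vector>

static void sigmoid(const float * src, float * dst, int n) {
    for (int i = 0; i < n; ++i) {
        dst[i] = 1.0f / (1.0f + std::exp(-src[i]));
    }
}

int main() {
    std::vector<float> x = { -2.0f, 0.0f, 2.0f };
    std::vector<float> y(x.size());
    sigmoid(x.data(), y.data(), (int) x.size());
    for (size_t i = 0; i < y.size(); ++i) {
        std::printf("sigmoid(% .1f) = %.4f\n", x[i], y[i]);   // ~0.1192, 0.5000, 0.8808
    }
    return 0;
}
```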
2024-05-11 | build: fix and ignore msvc warnings (ggml/805) | Borislav Stanimirov
2024-05-11 | convert : skip inaccessible HF repos (#7210) | CrispStrobe
2024-05-11 | server : free llama_batch on exit (#7212) | Steve Grubb
  * [server] Clean up a memory leak on exit
    There are a couple of memory leaks on exit of the server, and this one hides the others. After cleaning this up, leaks on the slots become visible; that is another patch to be sent after this one.
  * make tab into spaces
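The leak fixed here is the usual pattern of pairing llama_batch_init() with llama_batch_free() on the shutdown path. A minimal sketch of that pairing using the public llama.h API (server specifics omitted; this is not the server code itself):

```cpp
// Sketch: every llama_batch_init() needs a matching llama_batch_free(), including
// on exit - otherwise the address sanitizer reports a leak like the one fixed here.
#include "llama.h"

int main() {
    llama_backend_init();

    // capacity of 512 tokens, token-based batch (embd = 0), at most 1 sequence
    llama_batch batch = llama_batch_init(512, 0, 1);

    // ... decode loop would go here ...

    llama_batch_free(batch);   // the cleanup this commit adds on the exit path
    llama_backend_free();
    return 0;
}
```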
2024-05-11 | llama : lookup word in vocab before doing BPE merges (#7193) | Haoxiang Fei
  * fix: llama-3 ignore_merges
  * test: add test for llama-3 bpe ignore_merges
  * fix: set ignore_merges only for llama-3
  * fix: test-tokenizer-1-bpe --ignore-merges detection
  * fix: copy to fix fallthrough
  * fix: change ignore_merges to bool
  * fix: add ignore merges tests to cmake
  * llama : alternative merge ignore logic
  Co-authored-by: Haoxiang Fei <feihaoxiang@idea.edu.cn>
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
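The "ignore_merges" behaviour means: for Llama 3's BPE tokenizer, first check whether the whole word is already a vocabulary entry and only fall back to the usual byte-pair merges when it is not. A simplified, hypothetical sketch of that control flow (not the actual llama.cpp tokenizer, which works on its internal vocab structures):

```cpp
// Hypothetical illustration of the "ignore_merges" lookup: a whole-word vocab hit
// short-circuits BPE.
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

using token_id = int;

// stand-in for the real BPE merge loop; here it just maps each byte to a fake id
static std::vector<token_id> bpe_merge(const std::string & word) {
    std::vector<token_id> out;
    for (unsigned char c : word) out.push_back(1000 + c);
    return out;
}

static std::vector<token_id> tokenize_word(const std::string & word,
                                           const std::unordered_map<std::string, token_id> & vocab,
                                           bool ignore_merges) {
    if (ignore_merges) {
        auto it = vocab.find(word);
        if (it != vocab.end()) {
            return { it->second };   // the whole word is a known token: skip the merges
        }
    }
    return bpe_merge(word);
}

int main() {
    std::unordered_map<std::string, token_id> vocab = { { "hello", 42 } };
    auto a = tokenize_word("hello", vocab, /*ignore_merges=*/true);   // -> { 42 }
    auto b = tokenize_word("hellp", vocab, /*ignore_merges=*/true);   // -> falls back to merges
    std::printf("%zu token(s) vs %zu token(s)\n", a.size(), b.size());
    return 0;
}
```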
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203) | Johannes Gäßler
2024-05-11 | llama : add Jina Embeddings architecture (#6826) | Joan Fontanals
  * feat: first things to do
  * feat: create tensors for Jina architecture
  * fix: use other tensors
  * feat: embedding gets results
  * fix: fix usage of ALIBI
  * fix: clean prints
  * fix: do some cleanup unused vars
  * fix: revert changes to Makefile and CMakeLists
  * fix: revert some changes
  * fix: fix small detail
  * fix: fix convert formatting
  * fix: fix linting and editor
  * feat: set proper vocab settings
  * fix: JinaBertForMaskedLM registration
  * feat: support q_normalization and k_normalization in Jina arch
  * feat: handle gpt2 tokenizer with Jina architecture
  * feat: example comments in embedding
  * feat: rename Jina Bert to Jina Bert V2
  * fix: add some changes as per review
  * feat: proper KQ_pos for Jina embeddings
  * feat: add capacity to load models ES and DE for Spanish
  * llama : fix pre-tokenizers
  * ggml : full ALiBi support
  * ggml : update ggml_soft_max_ext() CUDA, SYCL
  * ggml : ggml_flash_attn_ext() support ALiBi (CPU)
  * ggml : ggml_flash_attn_ext() support ALiBi (Metal)
  * ggml : fix warning
  * ggml : ggml_flash_attn_ext() support ALiBi (CUDA) ggml-ci
  * minor : clean-up
  * embedding : add warning about missing SEP
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-11 | ggml : full ALiBi support (#7192) | Georgi Gerganov
  * ggml : full ALiBi support
  * ggml : update ggml_soft_max_ext() CUDA, SYCL
  * ggml : ggml_flash_attn_ext() support ALiBi (CPU)
  * ggml : ggml_flash_attn_ext() support ALiBi (Metal)
  * ggml : fix warning
  * ggml : ggml_flash_attn_ext() support ALiBi (CUDA) ggml-ci
  * ggml : fix assert message
  * vulkan : add dev notes
  * ggml : require mask when using ALiBi ggml-ci
  * convert : fix convert for refact models
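ALiBi (Attention with Linear Biases) replaces positional embeddings with a per-head linear penalty added to the attention scores before the softmax: score(i, j) += m_h * (j - i), where m_h is a negative-exponent power of two per head. A small illustrative snippet of the bias computation for a power-of-two head count (this only sketches the math, not the ggml soft_max / flash_attn_ext kernels this commit extends):

```cpp
// Illustrative ALiBi bias: head h gets slope m_h = 2^(-8*(h+1)/n_head), and the
// score for query position i and key position j is penalized by m_h * (j - i).
#include <cmath>
#include <cstdio>

int main() {
    const int n_head = 8;
    const int i = 5;   // query position
    for (int h = 0; h < n_head; ++h) {
        const float slope = std::pow(2.0f, -8.0f * (h + 1) / n_head);
        for (int j = 0; j <= i; ++j) {
            // 0 for the current token, increasingly negative for tokens further back
            const float bias = slope * (j - i);
            std::printf("head %d  j=%d  bias=% .4f\n", h, j, bias);
        }
    }
    return 0;
}
```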
2024-05-10 | llama-bench : add pp+tg test type (#7199) | slaren
2024-05-10 | metal : fix flash attention kernel requirements (#7169) | Georgi Gerganov
  * metal : fix flash attention kernel requirements ggml-ci
  * metal : fix ggml_metal_supports_op ggml-ci
2024-05-10 | convert : print "ignore_merges" field | Georgi Gerganov
2024-05-10 | llama : use n_vocab to differentiate between mistral 7B and llama3 8B (#7200) | slaren
2024-05-10 | Fix memory bug in grammar parser (#7194) | Justine Tunney
  The llama.cpp grammar parser had a bug where forgetting to add a closing quotation mark to strings would cause parsing to crash. Anyone running a server on a public endpoint is advised to upgrade. To reproduce this bug:
    ./llamafile -m foo.gguf -p bar --grammar 'root::="'
  Credit for discovering and reporting this issue goes to Eclypsium Security Researcher Richard Johnson <Richard.johnson@eclypsium.com>.
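The crash comes from scanning past the end of the grammar text when a string literal is never closed. The general fix pattern is to check for end-of-input on every character while scanning the quoted literal and fail gracefully instead of reading out of bounds; a simplified, hypothetical sketch of that pattern (not the actual llama.cpp parser code):

```cpp
// Simplified illustration of the defensive fix: stop at end-of-input while scanning
// a quoted literal instead of running off the end of the buffer.
#include <cstdio>
#include <string>

// returns true and advances pos past the closing quote on success;
// returns false if the string literal is never terminated
static bool parse_quoted_literal(const std::string & src, size_t & pos, std::string & out) {
    if (pos >= src.size() || src[pos] != '"') return false;
    ++pos;                                    // consume the opening quote
    while (pos < src.size() && src[pos] != '"') {
        out.push_back(src[pos++]);
    }
    if (pos >= src.size()) {
        std::fprintf(stderr, "grammar error: unterminated string literal\n");
        return false;                         // the unchecked case that used to crash
    }
    ++pos;                                    // consume the closing quote
    return true;
}

int main() {
    std::string grammar = "root::=\"";        // the reported reproducer: an unterminated string
    size_t pos = grammar.find('"');
    std::string lit;
    if (!parse_quoted_literal(grammar, pos, lit)) {
        std::printf("parse failed cleanly instead of crashing\n");
    }
    return 0;
}
```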
2024-05-10 | Main+: optionally allow special tokens from user in interactive mode (#7097) | HanishKVC
  @hanishkvc added a new `--interactive-specials` flag which allows inserting special tokens from the user side into the embedding stream.
2024-05-10 | llava : fix moondream support (#7163) | Andrei
  * Revert "Revert "llava : add support for moondream vision language model (#6899)""
    This reverts commit 9da243b36ac0b9d609adfaaa4c8f1cc8c592f737.
  * Fix num_positions and embeddings initialization
2024-05-10 | Minor arithmetic improvement to mmvq wrapper kernel (#7172) | Ouadie EL FAROUKI
2024-05-10 | eval-callback : fix conversion to float (#7184) | slaren
2024-05-09 | Vulkan Bugfixes and Improvements (#7084) | 0cc4m
  * Modify mat mat mul shader for mul_mat_id, modify mat vec mul shaders for single call batch operation
  * Further work towards MoE, disabled for now
  * Disable MoE code (not ready yet), fix a number of bugs in shaders and Vulkan code
  * Add softmax with f16 mask and pos buffer support
  * Disable mul_mat_id shaders for now
  * Fix flake8
  * Fix validation errors caused by empty buffers on larger batch sizes
2024-05-09 | readme : add scheduled server workflow status badge | Georgi Gerganov
2024-05-09 | readme : add app (#6371) | l3utterfly
  * added Layla to supported UIs
  * Update README.md
2024-05-09 | llama3 custom regex split (#6965) | jaime-m-p
  * merged the changes from deepseek models to main branch
  * Moved regex patterns to unicode.cpp and updated unicode.h
  * Moved header files
  * Resolved issues
  * added and refactored unicode_regex_split and related functions
  * Updated/merged the deepseek coder pr
  * Refactored code
  * Adding unicode regex mappings
  * Adding unicode regex function
  * Added needed functionality, testing remains
  * Fixed issues
  * Fixed issue with gpt2 regex custom preprocessor
  * unicode : fix? unicode_wstring_to_utf8
  * lint : fix whitespaces
  * tests : add tokenizer tests for numbers
  * unicode : remove redundant headers
  * tests : remove and rename tokenizer test scripts
  * tests : add sample usage
  * gguf-py : reader prints warnings on duplicate keys
  * llama : towards llama3 tokenization support (wip)
  * unicode : shot in the dark to fix tests on Windows
  * unicode : first try custom implementations
  * convert : add "tokenizer.ggml.pre" GGUF KV (wip)
  * llama : use new pre-tokenizer type
  * convert : fix pre-tokenizer type writing
  * lint : fix
  * make : add test-tokenizer-0-llama-v3
  * wip
  * models : add llama v3 vocab file
  * llama : adapt punctuation regex + add llama 3 regex
  * minor
  * unicode : set bomb
  * unicode : set bomb
  * unicode : always use std::wregex
  * unicode : support \p{N}, \p{L} and \p{P} natively
  * unicode : try fix windows
  * unicode : category support via std::regex
  * unicode : clean-up
  * unicode : simplify
  * llama3 custom regex split
  * convert : add convert-hf-to-gguf-update.py ggml-ci
  * lint : update
  * convert : add falcon ggml-ci
  * unicode : normalize signatures
  * lint : fix
  * lint : fix
  * convert : remove unused functions
  * convert : add comments
  * convert : exercise contractions ggml-ci
  * Using char32_t for codepoints
  * lint : fix
  * already exists unicode_tolower()
  * Typing
  * Restore BOM
  * cmake : refactor test targets
  * tests : refactor vocab tests ggml-ci
  * tests : add more vocabs and tests ggml-ci
  * unicode : cleanup
  * scripts : ignore new update script in check-requirements.sh
  * Fix merge
  * models : add phi-3, mpt, gpt-2, starcoder
  * tests : disable obsolete ggml-ci
  * tests : use faster bpe test ggml-ci
  * llama : more prominent warning for old BPE models
  * tests : disable test-tokenizer-1-bpe due to slowness ggml-ci
  * Move unused variable value
  * GPT2 custom regex split
  * Add alternative regex for custom split llama3
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
  * Style
  * Add bruteforce random tests for token encoding
  * wip: fixing unicode codepoint ranges
  * Fix merge
  * Unicode tables: separator, lowercase, uppercase and whitespace
  * llama3 custom regex split: fix \s
  * Restore BOM
  * Style
  * wip: generate NDF table
  * Ignore special tokens for testing
  * Clean gen-unicode-data.py
  * Refactor random tokenizer test
  * lint : fix
  * tests : add fail test for llama-bpe
  Co-authored-by: Jaggzh <jaggz.h@gmail.com>
  Co-authored-by: Kazim Abrar Mahi <kazimabrarmahi135@gmail.com>
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
  Co-authored-by: jaime-m-p <>
2024-05-09 | CUDA: generalize FP16 fattn vec kernel (#7061) | Johannes Gäßler
  * CUDA: generalize FP16 fattn vec kernel
  * disable unsupported head sizes for AMD in test
  * try AMD fix
  * fix batch size 2-8
  * partially revert changes