Age  Commit message  Author
2024-04-29  ggml : fix __MSC_VER -> _MSC_VER (#6977)  (Georgi Gerganov)
ggml-ci
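For context, a schematic illustration of this bug class (not the actual patch): MSVC defines `_MSC_VER`, so code guarded by the misspelled `__MSC_VER` is silently never compiled.

```cpp
// Schematic example of the bug class (not the actual patch):
#if defined(__MSC_VER)  // wrong: MSVC never defines this, so the branch is dead
#include <intrin.h>
#endif

#if defined(_MSC_VER)   // correct: defined by the MSVC compiler
#include <intrin.h>
#endif
```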
2024-04-29  llava-cli : multiple images (#6969)  (cpumaxx)
Co-authored-by: root <root@nenya.lothlorien.ca>
2024-04-29  readme : update hot topics  (Georgi Gerganov)
2024-04-29  llama : fix BPE pre-tokenization (#6920)  (Georgi Gerganov)
* merged the changes from deepseeker models to main branch
* Moved regex patterns to unicode.cpp and updated unicode.h
* Moved header files
* Resolved issues
* added and refactored unicode_regex_split and related functions
* Updated/merged the deepseek coder pr
* Refactored code
* Adding unicode regex mappings
* Adding unicode regex function
* Added needed functionality, testing remains
* Fixed issues
* Fixed issue with gpt2 regex custom preprocessor
* unicode : fix? unicode_wstring_to_utf8
* lint : fix whitespaces
* tests : add tokenizer tests for numbers
* unicode : remove redundant headers
* tests : remove and rename tokenizer test scripts
* tests : add sample usage
* gguf-py : reader prints warnings on duplicate keys
* llama : towards llama3 tokenization support (wip)
* unicode : shot in the dark to fix tests on Windows
* unicode : first try custom implementations
* convert : add "tokenizer.ggml.pre" GGUF KV (wip)
* llama : use new pre-tokenizer type
* convert : fix pre-tokenizer type writing
* lint : fix
* make : add test-tokenizer-0-llama-v3
* wip
* models : add llama v3 vocab file
* llama : adapt punctuation regex + add llama 3 regex
* minor
* unicode : set bomb
* unicode : set bomb
* unicode : always use std::wregex
* unicode : support \p{N}, \p{L} and \p{P} natively
* unicode : try fix windows
* unicode : category support via std::regex
* unicode : clean-up
* unicode : simplify
* convert : add convert-hf-to-gguf-update.py ggml-ci
* lint : update
* convert : add falcon ggml-ci
* unicode : normalize signatures
* lint : fix
* lint : fix
* convert : remove unused functions
* convert : add comments
* convert : exercise contractions ggml-ci
* lint : fix
* cmake : refactor test targets
* tests : refactor vocab tests ggml-ci
* tests : add more vocabs and tests ggml-ci
* unicode : cleanup
* scripts : ignore new update script in check-requirements.sh
* models : add phi-3, mpt, gpt-2, starcoder
* tests : disable obsolete ggml-ci
* tests : use faster bpe test ggml-ci
* llama : more prominent warning for old BPE models
* tests : disable test-tokenizer-1-bpe due to slowness ggml-ci
---------
Co-authored-by: Jaggzh <jaggz.h@gmail.com>
Co-authored-by: Kazim Abrar Mahi <kazimabrarmahi135@gmail.com>
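The core mechanism introduced here, as a simplified sketch (function and table names are illustrative, not the llama.cpp code): the converter writes a `tokenizer.ggml.pre` string into the GGUF metadata, and the loader maps that string to the regex set used to split text before applying BPE merges. The two patterns shown are the published GPT-2 and Llama 3 pre-tokenizer regexes.

```cpp
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Simplified sketch: select pre-tokenizer split regexes by the
// "tokenizer.ggml.pre" GGUF value.
static std::vector<std::string> regexes_for_pre(const std::string & pre) {
    static const std::map<std::string, std::vector<std::string>> table = {
        {"gpt-2",  {R"('s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+)"}},
        {"llama3", {R"((?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+)"}},
    };
    auto it = table.find(pre);
    if (it == table.end()) {
        throw std::runtime_error("unknown pre-tokenizer type: " + pre);
    }
    return it->second;
}
```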
2024-04-29  sampling : use std::random_device{}() for default random seed (#6962)  (David Renshaw)
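The change in a nutshell (illustrative helper, not the actual function): when no seed is supplied, draw one from `std::random_device` rather than using a fixed constant, so repeated runs differ by default.

```cpp
#include <cstdint>
#include <random>

// Illustrative: treat a negative user seed as "unset" and fall back to
// a non-deterministic value from std::random_device.
static uint32_t resolve_seed(int64_t user_seed) {
    return user_seed < 0 ? std::random_device{}() : (uint32_t) user_seed;
}

// Usage: std::mt19937 rng(resolve_seed(-1));  // fresh seed each run
```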
2024-04-29  convert : fix conversion of some BERT embedding models (#6937)  (Christian Zhou-Zheng)
2024-04-29  make : change GNU make default CXX from g++ to c++ (#6966)  (Przemysław Pawełczyk)
2024-04-29  ci : add building in MSYS2 environments (Windows) (#6967)  (Przemysław Pawełczyk)
2024-04-29  llama : fix typo LAMMAFILE -> LLAMAFILE (#6974)  (Johannes Gäßler)
2024-04-29  Fix more int overflow during quant (PPL/CUDA). (#6563)  (DAN™)
* Fix more int overflow during quant.
* Fix some more int overflow in softmax.
* Revert back to int64_t.
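The overflow pattern being fixed, schematically: with 32-bit indices the product `row * ne0` wraps once a tensor has more than 2^31 elements; promoting the operands to `int64_t` keeps the arithmetic exact.

```cpp
#include <cstdint>

// Schematic only. 32-bit version: row * ne0 wraps past INT_MAX on
// large tensors and indexes garbage memory.
float at_bad(const float * data, int row, int col, int ne0) {
    return data[row * ne0 + col];
}

// 64-bit version: the product is computed in int64_t and cannot wrap
// for any realistic tensor size.
float at_ok(const float * data, int64_t row, int64_t col, int64_t ne0) {
    return data[row * ne0 + col];
}
```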
2024-04-28  gguf : enforce that tensor names are unique (#6905)  (Xuan Son Nguyen)
* not allow adding duplicated tensor name
* no duplicated tensor while reading gguf
* typo
* throw exception inside llama_model_loader
  Co-authored-by: slaren <slarengh@gmail.com>
---------
Co-authored-by: slaren <slarengh@gmail.com>
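A minimal sketch of the guard described above (not the gguf API itself): collect names in a set and reject the file on the first duplicate.

```cpp
#include <stdexcept>
#include <string>
#include <unordered_set>
#include <vector>

// Sketch: reject a model whose tensor list repeats a name.
static void check_unique_tensor_names(const std::vector<std::string> & names) {
    std::unordered_set<std::string> seen;
    for (const std::string & name : names) {
        if (!seen.insert(name).second) {
            throw std::runtime_error("duplicated tensor name: " + name);
        }
    }
}
```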
2024-04-28  add device version in device list (#6959)  (Neo Zhang)
Co-authored-by: arthw <>
2024-04-28  flake.lock: Update  (github-actions[bot])
Flake lock file updates:
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv+l0FhvfDMiyCmhoRbNB+0SeInZkbk=' (2024-04-19)
  → 'github:NixOS/nixpkgs/7bb2ccd8cdc44c91edba16c48d2c8f331fb3d856?narHash=sha256-Drmja/f5MRHZCskS6mvzFqxEaZMeciScCTFxWVLqWEY=' (2024-04-25)
2024-04-27  Replace "alternative" boolean operator in conditional compilation directive (#6949)  (mgroeber9110)
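For reference, the pattern being replaced (schematic example): the alternative tokens `and`/`or` are valid C++, but not every compiler accepts them inside preprocessor directives, so the portable spelling is `&&`/`||`.

```cpp
// Schematic example. Some compilers reject the alternative token here:
#if defined(__GNUC__) and !defined(__clang__)
// ...
#endif

// Portable spelling:
#if defined(__GNUC__) && !defined(__clang__)
// ...
#endif
```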
2024-04-27  ci: server: tests python env on github container ubuntu latest / fix n_predict (#6935)  (Pierrick Hymbert)
* ci: server: fix python env
* ci: server: fix server tests after #6638
* ci: server: fix windows is not building PR branch
2024-04-26  Reset schedule earlier to allow overlap with ggml graph computation on device (#6933)  (agray3)
* Reset schedule earlier to allow overlap with graph computation on device
2024-04-26  quantize: add imatrix and dataset metadata in GGUF (#6658)  (Pierrick Hymbert)
* imatrix: save the dataset file used in the output file
* llama: support kv overrides type string string
* common: factorize KV Overrides parsing between common and server
* quantize: add imatrix n entries and dataset KV metadata
  quantize: factorize KV Overrides parsing between common #6656
* llama: remove kv override str_value initialization as it does not compile on some toolchain
* quantize: add imatrix m_last_call as `quantize.imatrix.chunks_count`
* quantize: add imatrix filename in KV
* llama: add llama_model_kv_override_free
* common: add llama_model_kv_override_free
  common: free kv override if used after model loading
* llama: finally move the string KV override value to the stack
* llama : minor
* no need to add a NUL to the std::vector, std::string can be initialized from a pair of iterators.
  Co-authored-by: slaren <slarengh@gmail.com>
* kv override: ensure string termination
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
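In effect, quantize now records provenance in the output file. A sketch using ggml's GGUF C API (key names are taken from the commit message; the value types and the file/dataset names are assumptions for illustration):

```cpp
#include "ggml.h"  // the gguf_* API shipped alongside ggml at this point

// Sketch: attach imatrix provenance to a quantized model's metadata.
// Paths are placeholders, not real defaults.
void add_imatrix_metadata(struct gguf_context * ctx) {
    gguf_set_val_str(ctx, "quantize.imatrix.file",         "imatrix.dat");
    gguf_set_val_str(ctx, "quantize.imatrix.dataset",      "wiki.train.raw");
    gguf_set_val_u32(ctx, "quantize.imatrix.chunks_count", 1234);  // type assumed
}
```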
2024-04-26  add basic tensor data validation function (#6884)  (slaren)
* add basic tensor data validation function
* add --check-tensors command line argument
  Tensor validation is disabled by default and can be enabled by adding `--check-tensors` to the command line arguments. quantize always validates tensors.
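A basic version of such a check, as a sketch (the real function also has to handle quantized types): scan for NaN/Inf values that indicate corrupt or truncated tensor data.

```cpp
#include <cmath>
#include <cstddef>

// Sketch: validate an f32 buffer by rejecting NaN and infinity.
static bool tensor_data_is_valid(const float * data, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        if (std::isnan(data[i]) || std::isinf(data[i])) {
            return false;
        }
    }
    return true;
}
```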
2024-04-26  gguf : fix mismatch between alloc and free functions (#6929)  (slaren)
2024-04-26  llamafile : use 64-bit integers in sgemm (#6928)  (Justine Tunney)
2024-04-26  ci: server: fix python installation (#6925)  (Pierrick Hymbert)
2024-04-26  server: stop generation at `n_ctx_train` if `n_predict` is not set (#6638)  (Pierrick Hymbert)
* server: cap n_predict if not set to n_ctx_train
* server: fix infinite loop
* server: infinite loop, move in process_token
  server: infinite loop: set stop limit to true
* minor: spaces
* minor: spaces
* server: include prompt tokens in the EOS limit
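The capping logic, schematically (names are illustrative of the server code, not copied from it):

```cpp
#include <cstdint>

// Sketch: when the client leaves n_predict unset (negative sentinel),
// bound generation by the training context, counting prompt tokens
// toward the limit so a request cannot loop forever.
static int32_t effective_n_predict(int32_t n_predict,
                                   int32_t n_ctx_train,
                                   int32_t n_prompt_tokens) {
    if (n_predict < 0) {
        return n_ctx_train - n_prompt_tokens;
    }
    return n_predict;
}
```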
2024-04-26  ci: server: fix python installation (#6922)  (Pierrick Hymbert)
2024-04-26  Merge pull request from GHSA-p5mv-gjc5-mwqv  (Georgi Gerganov)
* always use calloc
  clamp n_kv on failure to read a kv
* ggml : alternative ctx->header.n_kv update
---------
Co-authored-by: slaren <slarengh@gmail.com>
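The hardening pattern, schematically: `calloc` zero-initializes and fails cleanly when `n * size` would overflow, so a corrupt element count read from an untrusted file can neither produce an undersized buffer nor leave stale bytes that parse as valid entries.

```cpp
#include <cstddef>
#include <cstdlib>

struct kv_pair { char * key; char * value; };  // schematic stand-in

// Sketch: n_kv comes from an untrusted file header. calloc checks the
// n * size multiplication for overflow and zero-fills the result, so
// partially-read entries are never interpreted as stale garbage.
static kv_pair * alloc_kv(size_t n_kv) {
    return (kv_pair *) calloc(n_kv, sizeof(kv_pair));
}
```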
2024-04-26  ci: server: fix python installation (#6918)  (Pierrick Hymbert)
2024-04-26  ci: fix concurrency for pull_request_target (#6917)  (Pierrick Hymbert)
2024-04-26  bench: server add stop word for PHI-2 (#6916)  (Pierrick Hymbert)
2024-04-25  llava : add support for moondream vision language model (#6899)  (vik)
* add support for moondream vision language model
  This required making the following changes to the CLIP model:
  1. Support for patch embedding bias.
  2. Make class embedding and pre-layernorm optional.
  3. Add support for post-layernorm.
* Update examples/llava/clip.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
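Change 2 above amounts to a null check in the graph builder. A sketch under assumed names (not the clip.cpp code):

```cpp
#include "ggml.h"

// Sketch: apply pre-layernorm only when its weights are present in the
// checkpoint; models that omit it pass straight through.
static struct ggml_tensor * maybe_pre_ln(struct ggml_context * ctx,
                                         struct ggml_tensor * cur,
                                         struct ggml_tensor * ln_w,  // may be NULL
                                         struct ggml_tensor * ln_b,  // may be NULL
                                         float eps) {
    if (ln_w == NULL) {
        return cur;  // no pre-layernorm in this model
    }
    cur = ggml_norm(ctx, cur, eps);
    cur = ggml_add(ctx, ggml_mul(ctx, cur, ln_w), ln_b);
    return cur;
}
```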
2024-04-25  cmake : restore LLAMA_LLAMAFILE_DEFAULT  (Georgi Gerganov)
2024-04-25  cmake : remove obsolete ANDROID check  (Georgi Gerganov)
2024-04-25  llama : synchronize before get/set session data (#6911)  (slaren)
2024-04-25  ci : tmp disable slow tests  (Georgi Gerganov)
2024-04-25  readme : update model list (#6908)  (BarfingLemurs)
* Update README.md
* missing space
* llama3 !
2024-04-25  llama : check that all the tensor data is in the model file (#6885)  (slaren)
* llama : check that all the tensor data is in the model file
* also check for unsigned overflow
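Both checks fit in one comparison if written carefully, as below (schematic): the naive `offs + size <= file_size` can wrap around in unsigned arithmetic and pass for out-of-range tensors.

```cpp
#include <cstdint>

// Sketch: overflow-safe "does this tensor lie inside the file?" test.
// The naive form offs + size <= file_size wraps when the sum exceeds
// UINT64_MAX, silently accepting a bogus offset.
static bool tensor_within_file(uint64_t offs, uint64_t size, uint64_t file_size) {
    return offs <= file_size && size <= file_size - offs;
}
```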
2024-04-25  ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (#6906)  (Georgi Gerganov)
2024-04-25  clip : rename lerp function to avoid conflict (#6894)  (Daniel Bevenius)
This commit renames the lerp (linear interpolation) function in clip.cpp to avoid a conflict with the lerp function that the <cmath> standard C++ library gained in C++20 (202002L). The motivation is to let projects that build with C++20 compile clip.cpp without having to patch it. llama.cpp currently uses C++11 (or C++17 for SYCL), which is why the conflict does not surface yet, but the rename keeps C++20 builds working just the same. Refs: https://en.cppreference.com/w/cpp/numeric/lerp Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
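The clash, schematically (the helper name below is assumed; the actual rename in clip.cpp may differ):

```cpp
#include <cmath>  // C++20 adds std::lerp to this header

// Pre-C++20 a local helper named lerp compiled fine; under C++20 an
// unqualified call can become ambiguous with std::lerp (e.g. after a
// using-directive), hence the rename. Sketch only.
static float clip_lerp(float s, float e, float t) {
    return s + (e - s) * t;
}
```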
2024-04-25  ggml : fix MIN / MAX macros (#6904)  (Georgi Gerganov)
ggml-ci
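The commit body does not spell out the change, but the usual ways a MIN/MAX macro goes wrong are redefinition across headers and missing parentheses; sketches of both guards:

```cpp
// Guard against redefinition when several headers define the macros:
#ifndef MIN
#define MIN(a, b) ((a) < (b) ? (a) : (b))
#endif
#ifndef MAX
// Full parenthesization so precedence cannot split the expansion,
// e.g. MAX(x, y) + 1 must not expand to x > y ? x : y + 1.
#define MAX(a, b) ((a) > (b) ? (a) : (b))
#endif
```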
2024-04-25  tests : minor bash stuff (#6902)  (Georgi Gerganov)
* tests : minor bash stuff ggml-ci
* llama : fix build ggml-ci
* tests : fix CUR_DIR -> ROOT_DIR ggml-ci
* tests : fix fname ggml-ci
2024-04-25  quantize : add '--keep-split' to quantize model into shards (#6688)  (jiez)
* Implement '--keep-split' to quantize model into several shards
* Add test script
* Update examples/quantize/quantize.cpp
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Split model correctly even if tensor id is out-of-order
* Update llama_model_quantize_params
* Fix preci failures
---------
Co-authored-by: z5269887 <z5269887@unsw.edu.au>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24  README: add graphic for matrix multiplication (#6881)  (Johannes Gäßler)
2024-04-24  llama : add llama_get_pooling_type function (#6862)  (Douglas Hanley)
* add llama_get_pooling_type function
* fix argument name, move with ctx funcs
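The new accessor, as suggested by the commit title (the signature and enum names below are assumptions; check llama.h):

```cpp
#include "llama.h"

// Sketch: ask the context how token embeddings are pooled into one
// sequence embedding (none / mean / CLS, per llama.h's pooling enum).
static bool uses_mean_pooling(const struct llama_context * ctx) {
    return llama_get_pooling_type(ctx) == LLAMA_POOLING_TYPE_MEAN;
}
```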
2024-04-24  server : do not apply Markdown formatting in code sections (#6850)  (mgroeber9110)
2024-04-24  common : revert showing control tokens by default for server (#6860)  (Kyle Mistele)
* fix: revert showing control tokens by default
* feat: revert changes to default behavior of llama_token_to_piece; provide overridden declaration to receive "bool special" param to toggle showing control tokens
* feat: use the overridden declaration of llama_token_to_piece from common/common.cpp to specify "false" so that control tokens are not shown in chat completion responses
* common : simplify
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24  Server: fix seed for multiple slots (#6835)  (Johannes Gäßler)
* Server: add tests for consistent results
* sampling: separate rng per sampling context
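The fix, schematically: give each server slot its own generator, so a fixed per-request seed reproduces the same completion regardless of what other slots are doing (names illustrative):

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Sketch: one rng per slot instead of a process-global one, so
// concurrent requests cannot perturb each other's sampling streams.
struct slot_sampling {
    std::mt19937 rng;
};

static void start_request(std::vector<slot_sampling> & slots, size_t id, uint32_t seed) {
    slots[id].rng.seed(seed);  // reseeds only this slot
}
```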
2024-04-24  ggml : move 32-bit arm compat in ggml-impl.h (#6865)  (Georgi Gerganov)
ggml-ci
2024-04-24  llama : add phi 3 chat template (#6857)  (Tristan Druyen)
* Add phi 3 chat template & tests
* test : fix chat template result
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
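For reference, the phi-3 turn format the template implements (as published in the model card; the helper below is illustrative, not the llama.cpp template code):

```cpp
#include <string>

// Sketch: render one phi-3 chat turn, e.g.
//   <|user|>\nHello<|end|>\n
// followed by "<|assistant|>\n" to prompt the reply.
static std::string phi3_turn(const std::string & role, const std::string & content) {
    return "<|" + role + "|>\n" + content + "<|end|>\n";
}
```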
2024-04-24  convert : add support of codeqwen due to tokenizer (#6707)  (Junyang Lin)
* add support of codeqwen due to tokenizer
* override load_hparams
* fix typo
* fix load_params
* convert : fix whitespace
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24  llama : add phi3 support (#6852)  (liuwei-git)
* add explicit phi3 support
* add explicit phi3 support
* remove unused code
* convert : add BOS token
* llama : match EOT token <|end|>
* llama : minor / style
* llama : tabs -> spaces
* convert : fix lint checks
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-23  [SYCL] Windows default build instructions without -DLLAMA_SYCL_F16 flag activated (#6767)  (Anas Ahouzi)
* Fix FP32/FP16 build instructions
* Fix typo
* Recommended build instruction
  Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
* Recommended build instruction
  Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
* Recommended build instruction
  Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
* Add comments in Intel GPU linux
---------
Co-authored-by: Anas Ahouzi <112881240+aahouzi-intel@users.noreply.github.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2024-04-22  llamafile : improve sgemm.cpp (#6796)  (Justine Tunney)
* llamafile : improve sgemm.cpp
  - Re-enable by default
  - Fix issue described in #6716
  - Make code more abstract, elegant, and maintainable
  - Faster handling of weirdly shaped `m` and `n` edge cases
* Address review comments
* Help clang produce fma instructions
* Address review comments