Date | Commit message | Author
2024-04-26 | server: stop generation at `n_ctx_train` if `n_predict` is not set (#6638) | Pierrick Hymbert
* server: cap n_predict if not set to n_ctx_train
* server: fix infinite loop
* server: infinite loop, move in process_token
  server: infinite loop: set stop limit to true
* minor: spaces
* minor: spaces
* server: include prompt tokens in the EOS limit
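For context, a minimal sketch of the cap described above (function and field names are illustrative, not the server's actual internals): when the client leaves n_predict unset, generation is limited to the training context so a model that never emits a stop token cannot loop forever.

```cpp
// Illustrative only: clamp the number of tokens to predict when the client
// does not set n_predict (conventionally -1 = unlimited). Prompt tokens
// count toward the limit, matching "include prompt tokens in the EOS limit".
int effective_n_predict(int n_predict, int n_ctx_train, int n_prompt_tokens) {
    if (n_predict < 0) {
        return n_ctx_train - n_prompt_tokens;
    }
    return n_predict;
}
```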
2024-04-26 | ci: server: fix python installation (#6922) | Pierrick Hymbert
2024-04-26 | Merge pull request from GHSA-p5mv-gjc5-mwqv | Georgi Gerganov
* always use calloc
  clamp n_kv on failure to read a kv
* ggml : alternative ctx->header.n_kv update

Co-authored-by: slaren <slarengh@gmail.com>
2024-04-26 | ci: server: fix python installation (#6918) | Pierrick Hymbert
2024-04-26 | ci: fix concurrency for pull_request_target (#6917) | Pierrick Hymbert
2024-04-26 | bench: server add stop word for PHI-2 (#6916) | Pierrick Hymbert
2024-04-25 | llava : add support for moondream vision language model (#6899) | vik
* add support for moondream vision language model
  This required making the following changes to the CLIP model:
  1. Support for patch embedding bias.
  2. Make class embedding and pre-layernorm optional.
  3. Add support for post-layernorm.
* Update examples/llava/clip.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-25 | cmake : restore LLAMA_LLAMAFILE_DEFAULT | Georgi Gerganov
2024-04-25 | cmake : remove obsolete ANDROID check | Georgi Gerganov
2024-04-25 | llama : synchronize before get/set session data (#6911) | slaren
2024-04-25 | ci : tmp disable slow tests | Georgi Gerganov
2024-04-25 | readme : update model list (#6908) | BarfingLemurs
* Update README.md
* missing space
* llama3 !
2024-04-25 | llama : check that all the tensor data is in the model file (#6885) | slaren
* llama : check that all the tensor data is in the model file
* also check for unsigned overflow
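The overflow concern is subtle enough to merit a sketch. Below is one way to write such a bounds check so the addition cannot wrap (illustrative, not the actual llama.cpp code):

```cpp
#include <cstddef>

// Checks that [offset, offset + size) lies inside a file of file_size bytes.
// The naive "offset + size <= file_size" is wrong: offset + size can wrap
// around on unsigned overflow and incorrectly pass the check.
bool tensor_in_file(std::size_t offset, std::size_t size, std::size_t file_size) {
    return size <= file_size && offset <= file_size - size;
}
```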
2024-04-25 | ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (#6906) | Georgi Gerganov
2024-04-25 | clip : rename lerp function to avoid conflict (#6894) | Daniel Bevenius
This commit renames the lerp (linear interpolation) function in clip.cpp to avoid a conflict with the lerp function in the <cmath> standard C++ library when using C++20. The motivation is to let projects that use C++20 compile clip.cpp without having to patch it. std::lerp was added to <cmath> in C++20 (202002L), which is why there is no issue at the moment: llama.cpp currently builds with C++11 (or C++17 in the case of SYCL). I wanted to ask if this would be an acceptable change just the same.

Refs: https://en.cppreference.com/w/cpp/numeric/lerp
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
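A minimal reproduction of this kind of collision (illustrative; not necessarily how clip.cpp triggers it). This compiles under C++17 but fails under C++20, where overload resolution cannot choose between the two candidates:

```cpp
#include <cmath>     // declares std::lerp in C++20
using namespace std;

static float lerp(float a, float b, float t) { return a + t * (b - a); }

float blend(float a, float b) {
    // Under -std=c++20 this call is ambiguous: the file-local lerp and
    // std::lerp (made visible by the using-directive) both match exactly.
    return lerp(a, b, 0.5f);
}
```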
2024-04-25 | ggml : fix MIN / MAX macros (#6904) | Georgi Gerganov
ggml-ci
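The commit message does not spell out the fix, but the classic MIN/MAX macro pitfall looks like this (shown as an assumption, not necessarily the exact change in this commit):

```cpp
// Without full parenthesization, arguments containing operators mis-expand:
// BAD_MIN(x & 1, y) becomes x & 1 < y ? x & 1 : y, and since < binds
// tighter than &, that parses as x & (1 < y) ? ... — not what was meant.
#define BAD_MIN(a, b) a < b ? a : b

// Safe form: parenthesize every argument and the whole expansion.
#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define MAX(a, b) (((a) > (b)) ? (a) : (b))
```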
2024-04-25 | tests : minor bash stuff (#6902) | Georgi Gerganov
* tests : minor bash stuff
  ggml-ci
* llama : fix build
  ggml-ci
* tests : fix CUR_DIR -> ROOT_DIR
  ggml-ci
* tests : fix fname
  ggml-ci
2024-04-25 | quantize : add '--keep-split' to quantize model into shards (#6688) | jiez
* Implement '--keep-split' to quantize model into several shards
* Add test script
* Update examples/quantize/quantize.cpp
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Split model correctly even if tensor id is out-of-order
* Update llama_model_quantize_params
* Fix preci failures

Co-authored-by: z5269887 <z5269887@unsw.edu.au>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 | README: add graphic for matrix multiplication (#6881) | Johannes Gäßler
2024-04-24 | llama : add llama_get_pooling_type function (#6862) | Douglas Hanley
* add llama_get_pooling_type function
* fix argument name, move with ctx funcs
2024-04-24 | server : do not apply Markdown formatting in code sections (#6850) | mgroeber9110
2024-04-24 | common : revert showing control tokens by default for server (#6860) | Kyle Mistele
* fix: revert showing control tokens by default
* feat: revert changes to default behavior of llama_token_to_piece; provide overridden declaration to receive "bool special" param to toggle showing control tokens
* feat: use the overridden declaration of llama_token_to_piece from common/common.cpp to specify "false" so that control tokens are not shown in chat completion responses
* common : simplify

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
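A toy sketch of the pattern described above (not the actual llama.cpp API; names are invented for illustration): token rendering takes a "special" flag, and the server passes false so control tokens are hidden from chat completion responses.

```cpp
#include <iostream>
#include <string>

// Toy vocabulary: token 2 is a control token. When special is false, the
// control token renders as an empty string instead of its literal text.
std::string token_to_piece(int token, bool special) {
    if (token == 2) return special ? "<|im_end|>" : "";
    return "hi";
}

int main() {
    std::cout << token_to_piece(2, true)  << "\n"; // "<|im_end|>" (shown)
    std::cout << token_to_piece(2, false) << "\n"; // "" (hidden in responses)
}
```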
2024-04-24 | Server: fix seed for multiple slots (#6835) | Johannes Gäßler
* Server: add tests for consistent results
* sampling: separate rng per sampling context
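The fix is easiest to see as a data-layout change: each sampling context owns its own RNG rather than all slots sharing one global generator, so per-slot seeds give reproducible, independent streams. A minimal sketch with invented names:

```cpp
#include <cstdint>
#include <random>

// One of these per server slot; each slot's draws never interleave with
// another slot's, so a fixed seed reproduces the same sequence every run.
struct sampling_ctx {
    std::mt19937 rng;
    explicit sampling_ctx(uint32_t seed) : rng(seed) {}

    int sample(int n_vocab) {
        std::uniform_int_distribution<int> dist(0, n_vocab - 1);
        return dist(rng);
    }
};
```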
2024-04-24 | ggml : move 32-bit arm compat in ggml-impl.h (#6865) | Georgi Gerganov
ggml-ci
2024-04-24 | llama : add phi 3 chat template (#6857) | Tristan Druyen
* Add phi 3 chat template & tests
* test : fix chat template result

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
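For reference, the Phi-3 turn format looks roughly like the following (sketched as a C++ string literal; placeholders in braces are not literal, and tests/test-chat-template.cpp is the authoritative source for the expected output):

```cpp
// Approximate Phi-3 chat format: each turn is opened by a role tag and
// closed by <|end|>, with the assistant tag left open for generation.
const char * phi3_turns =
    "<|user|>\n"
    "{user message}<|end|>\n"
    "<|assistant|>\n"
    "{assistant reply}<|end|>\n";
```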
2024-04-24 | convert : add support for codeqwen due to its tokenizer (#6707) | Junyang Lin
* add support for codeqwen due to its tokenizer
* override load_hparams
* fix typo
* fix load_params
* convert : fix whitespace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 | llama : add phi3 support (#6852) | liuwei-git
* add explicit phi3 support
* add explicit phi3 support
* remove unused code
* convert : add BOS token
* llama : match EOT token <|end|>
* llama : minor / style
* llama : tabs -> spaces
* convert : fix lint checks

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-23 | [SYCL] Windows default build instructions without -DLLAMA_SYCL_F16 flag activated (#6767) | Anas Ahouzi
* Fix FP32/FP16 build instructions
* Fix typo
* Recommended build instruction
  Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
* Recommended build instruction
  Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
* Recommended build instruction
  Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
* Add comments in Intel GPU linux

Co-authored-by: Anas Ahouzi <112881240+aahouzi-intel@users.noreply.github.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2024-04-22 | llamafile : improve sgemm.cpp (#6796) | Justine Tunney
* llamafile : improve sgemm.cpp
  - Re-enable by default
  - Fix issue described in #6716
  - Make code more abstract, elegant, and maintainable
  - Faster handling of weirdly shaped `m` and `n` edge cases
* Address review comments
* Help clang produce fma instructions
* Address review comments
2024-04-22 | ggml : fix calloc argument ordering (#6820) | Dave Airlie
Latest gcc complains here:

  /home/airlied/devel/llama.cpp/ggml-alloc.c: In function ‘ggml_gallocr_new_n’:
  /home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: warning: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
    374 | ggml_gallocr_t galloc = (ggml_gallocr_t)calloc(sizeof(struct ggml_gallocr), 1);
        |                                                ^~~~~~
  /home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: note: earlier argument should specify number of elements, later size of each element

and a bunch more. calloc is specified to take nmemb first, then size, so realign the code. In a couple of places there was a "* x, 1", so I fixed those to use calloc properly.
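The rule the warning enforces is simply argument order: calloc(nmemb, size) takes the element count first and the element size second. A compilable before/after:

```cpp
#include <cstdlib>

struct gallocr { int n; };

int main() {
    // Transposed arguments: recent gcc warns with -Wcalloc-transposed-args.
    gallocr * bad  = (gallocr *) std::calloc(sizeof(gallocr), 1);
    // Correct order: number of elements first, element size second.
    gallocr * good = (gallocr *) std::calloc(1, sizeof(gallocr));
    std::free(bad);
    std::free(good);
    return 0;
}
```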
2024-04-22 | llama : fix typo in <|im_end|> token text (#6745) | Georgi Gerganov
2024-04-22 | ci: fix jobs cancelling each other (#6781) | Pierrick Hymbert
2024-04-22 | flake.lock: Update | github-actions[bot]
Flake lock file updates:
• Updated input 'nixpkgs':
  'github:NixOS/nixpkgs/1042fd8b148a9105f3c0aca3a6177fd1d9360ba5?narHash=sha256-3sbWO1mbpWsLepZGbWaMovSO7ndZeFqDSdX0hZ9nVyw%3D' (2024-04-10)
  → 'github:NixOS/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv%2Bl0FhvfDMiyCmhoRbNB%2B0SeInZkbk%3D' (2024-04-19)
2024-04-21 | `build`: generate hex dump of server assets during build (#6661) | Olivier Chafik
* `build`: generate hex dumps of server assets on the fly
* build: workaround lack of -n on gnu xxd
* build: don't use xxd in cmake
* build: don't call xxd from build.zig
* build: more idiomatic hexing
* build: don't use xxd in Makefile (od hackery instead)
* build: avoid exceeding max cmd line limit in makefile hex dump
* build: hex dump assets at cmake build time (not config time)
2024-04-21 | llama : add option to render special/control tokens (#6807) | Georgi Gerganov
* make : fix common dep on llama.h
* llama : add option to render special tokens
* readme : add API change notice
  ggml-ci
* swift : fix build
2024-04-21 | ggml : fix ggml_backend_cpu_supports_op() for CPY (#0) | Georgi Gerganov
2024-04-21 | llama : add llama-3 chat template (#6751) | Wouter
* Added llama-3 chat template
* Update llama.cpp
  Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>
* Update llama.cpp
  Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>
* Update tests/test-chat-template.cpp
  Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>
* Added EOS stop sequence according to https://github.com/ggerganov/llama.cpp/pull/6751#issuecomment-2065602862
* Removed adding of BOS token before first message
* Removed bos token from expected output from llama-3
* Update tests/test-chat-template.cpp
  Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
* Update tests/test-chat-template.cpp
  Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
* Added <|end_of_text|> as another stop token
* Reverted last change of adding the end_of_text stop word for llama 3

Co-authored-by: Wouter Tichelaar <tichelaarw@spar.net>
Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>
Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
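For reference, the prompt this template produces looks roughly like the following (sketched as a C++ string literal; placeholders in braces are not literal, and tests/test-chat-template.cpp has the authoritative expected output). Note the BOS token is not added by the template itself, per the commits above, and <|eot_id|> serves as the stop sequence:

```cpp
// Approximate Llama 3 chat format: role headers followed by a blank line,
// each completed turn closed by <|eot_id|>, assistant header left open.
const char * llama3_prompt =
    "<|start_header_id|>system<|end_header_id|>\n\n{system prompt}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n{user message}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n";
```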
2024-04-21 | gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761) | pmysl
2024-04-21 | doc : add link to falcon (#6789) | Jan Boon
2024-04-21 | readme : add Fedora instructions (#6783) | Mohammadreza Hendiani
* added Fedora to the list of distros that may need the package (the packages have the same name on Fedora)
* how to add CLBlast, which is available in the Fedora repos
2024-04-21 | llava : use logger in llava-cli (#6797) | Justine Tunney
This change removes printf() logging so llava-cli is shell scriptable.
2024-04-21 | llama : support Llama 3 HF conversion (#6745) | Pedro Cuenca
* Support Llama 3 conversion
  The tokenizer is BPE.
* style
* Accept suggestion
  Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* llama : add llama_token_is_eog()
  ggml-ci
* llama : auto-detect more EOT tokens when missing in KV data
* convert : replacing EOS token is a hack
* llama : fix codegemma EOT token + add TODOs
* llama : fix model type string for 8B model

Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
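The llama_token_is_eog() helper added here lets generation loops stop on any end-of-generation token (EOS, EOT, ...) instead of comparing against a single hard-coded EOS id. A usage sketch against the public API; the two sampling helpers are hypothetical stand-ins, and llama.h has the exact signature:

```cpp
#include "llama.h"

// Hypothetical helpers standing in for the real sampling code:
llama_token sample_next_token(llama_context * ctx);
void accept_token(llama_context * ctx, llama_token id);

void generate(llama_model * model, llama_context * ctx, int n_max) {
    for (int i = 0; i < n_max; i++) {
        const llama_token id = sample_next_token(ctx);
        if (llama_token_is_eog(model, id)) {
            break; // stops on <|end_of_text|>, <|eot_id|>, etc.
        }
        accept_token(ctx, id);
    }
}
```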
2024-04-20 | doc : server tests require llama to be built with curl enabled (#6788) | Jan Boon
2024-04-20 | common : try to fix Android CI (#6780) | Georgi Gerganov
* common : disable get_math_cpu_count() until Android CI gets fixed
* common : another try
2024-04-19 | ci: add ubuntu latest release and fix missing build number (mac & ubuntu) (#6748) | loonerin
2024-04-19 | server: static: upstream upgrade (#6765) | Pierrick Hymbert
2024-04-19 | Implement the OLMo architecture (#6741) | nopperl
* implement olmo architecture
* remove unused variable
* remove unused moe branch
* remove check for weight
* remove superfluous moe, bias and rope tensors
* clarified comment
* fix clamp_kqv setting
* remove obsolete parameter name filter
2024-04-19 | train : add general name (#6752) | Austin
* llama : make general.name optional
* train: Add 'general.name' to model metadata
  Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>

Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-19 | fix wrong command parameter in readme-sycl.md (#6755) | Neo Zhang
Co-authored-by: jianyuzh <jianyu.zhang@intel.com>
2024-04-18 | ggml : group all experts in a single ggml_mul_mat_id (#6505) | slaren
* ggml : group all experts in a single ggml_mul_mat_id
  cuda : improve mmid row copy
* cuda : fix bin bcast with non-cont src0
* test-backend-ops : only run all mul mat tests for base types
* llama : disable moe offloading with SYCL

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>