Age | Commit message | Author
2024-04-25 | tests : minor bash stuff (#6902) | Georgi Gerganov
* tests : minor bash stuff ggml-ci
* llama : fix build ggml-ci
* tests : fix CUR_DIR -> ROOT_DIR ggml-ci
* tests : fix fname ggml-ci
2024-04-25 | quantize : add '--keep-split' to quantize model into shards (#6688) | jiez
* Implement '--keep-split' to quantize model into several shards
* Add test script
* Update examples/quantize/quantize.cpp
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Split model correctly even if tensor id is out-of-order
* Update llama_model_quantize_params
* Fix preci failures
---------
Co-authored-by: z5269887 <z5269887@unsw.edu.au>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 | README: add graphic for matrix multiplication (#6881) | Johannes Gäßler
2024-04-24 | llama : add llama_get_pooling_type function (#6862) | Douglas Hanley
* add llama_get_pooling_type function * fix argument name, move with ctx funcs
2024-04-24 | server : do not apply Markdown formatting in code sections (#6850) | mgroeber9110
2024-04-24 | common : revert showing control tokens by default for server (#6860) | Kyle Mistele
* fix: revert showing control tokens by default
* feat: revert changes to default behavior of llama_token_to_piece; provide overridden declaration to receive "bool special" param to toggle showing control tokens
* feat: use the overridden declaration of llama_token_to_piece from common/common.cpp to specify "false" so that control tokens are not shown in chat completion responses
* common : simplify
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 | Server: fix seed for multiple slots (#6835) | Johannes Gäßler
* Server: add tests for consistent results * sampling: separate rng per sampling context
2024-04-24 | ggml : move 32-bit arm compat in ggml-impl.h (#6865) | Georgi Gerganov
ggml-ci
2024-04-24 | llama : add phi 3 chat template (#6857) | Tristan Druyen
* Add phi 3 chat template & tests
* test : fix chat template result
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 | convert : add support of codeqwen due to tokenizer (#6707) | Junyang Lin
* add support of codeqwen due to tokenizer
* override load_hparams
* fix typo
* fix load_params
* convert : fix whitespace
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-24 | llama : add phi3 support (#6852) | liuwei-git
* add explicit phi3 support
* remove unused code
* convert : add BOS token
* llama : match EOT token <|end|>
* llama : minor / style
* llama : tabs -> spaces
* convert : fix lint checks
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-23 | [SYCL] Windows default build instructions without -DLLAMA_SYCL_F16 flag activated (#6767) | Anas Ahouzi
* Fix FP32/FP16 build instructions
* Fix typo
* Recommended build instruction
  Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
* Add comments in Intel GPU linux
---------
Co-authored-by: Anas Ahouzi <112881240+aahouzi-intel@users.noreply.github.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
2024-04-22 | llamafile : improve sgemm.cpp (#6796) | Justine Tunney
* llamafile : improve sgemm.cpp
  - Re-enable by default
  - Fix issue described in #6716
  - Make code more abstract, elegant, and maintainable
  - Faster handling of weirdly shaped `m` and `n` edge cases
* Address review comments
* Help clang produce fma instructions
* Address review comments
2024-04-22 | ggml : fix calloc argument ordering. (#6820) | Dave Airlie
Latest gcc complains here:
/home/airlied/devel/llama.cpp/ggml-alloc.c: In function ‘ggml_gallocr_new_n’:
/home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: warning: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
  374 |     ggml_gallocr_t galloc = (ggml_gallocr_t)calloc(sizeof(struct ggml_gallocr), 1);
      |                                                    ^~~~~~
/home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: note: earlier argument should specify number of elements, later size of each element
and a bunch more. calloc is specified to take nmemb first then size, so realign the code. In a couple of places there was a `* x, 1` so I fixed those to use calloc properly.
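For illustration, a minimal sketch of the transposed-argument pattern gcc warns about and the realigned call; the struct name is hypothetical and only stands in for the actual ggml types:

```c
#include <stdlib.h>

/* hypothetical stand-in for struct ggml_gallocr */
struct gallocr_like { int n_buffers; void * data; };

int main(void) {
    /* transposed: sizeof() passed as the first (nmemb) argument, element size given as 1;
       recent gcc flags this pattern with -Wcalloc-transposed-args */
    struct gallocr_like * before = calloc(sizeof(struct gallocr_like), 1);

    /* realigned: calloc(nmemb, size) -- number of elements first, size of each element second */
    struct gallocr_like * after = calloc(1, sizeof(struct gallocr_like));

    free(before);
    free(after);
    return 0;
}
```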
2024-04-22 | llama : fix typo in <|im_end|> token text (#6745) | Georgi Gerganov
2024-04-22 | ci: fix job are cancelling each other (#6781) | Pierrick Hymbert
2024-04-22 | flake.lock: Update | github-actions[bot]
Flake lock file updates:
• Updated input 'nixpkgs':
  'github:NixOS/nixpkgs/1042fd8b148a9105f3c0aca3a6177fd1d9360ba5?narHash=sha256-3sbWO1mbpWsLepZGbWaMovSO7ndZeFqDSdX0hZ9nVyw%3D' (2024-04-10)
  → 'github:NixOS/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv%2Bl0FhvfDMiyCmhoRbNB%2B0SeInZkbk%3D' (2024-04-19)
2024-04-21 | `build`: generate hex dump of server assets during build (#6661) | Olivier Chafik
* `build`: generate hex dumps of server assets on the fly
* build: workaround lack of -n on gnu xxd
* build: don't use xxd in cmake
* build: don't call xxd from build.zig
* build: more idiomatic hexing
* build: don't use xxd in Makefile (od hackery instead)
* build: avoid exceeding max cmd line limit in makefile hex dump
* build: hex dump assets at cmake build time (not config time)
2024-04-21 | llama : add option to render special/control tokens (#6807) | Georgi Gerganov
* make : fix common dep on llama.h
* llama : add option to render special tokens
* readme : add API change notice ggml-ci
* swift : fix build
2024-04-21 | ggml : fix ggml_backend_cpu_supports_op() for CPY (#0) | Georgi Gerganov
2024-04-21 | llama : add llama-3 chat template (#6751) | Wouter
* Added llama-3 chat template
* Update llama.cpp
  Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>
* Update tests/test-chat-template.cpp
  Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>
* Added EOS stop sequence according to https://github.com/ggerganov/llama.cpp/pull/6751#issuecomment-2065602862
* Removed adding of BOS token before first message
* Removed bos token from expected output from llama-3
* Update tests/test-chat-template.cpp
  Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
* Added <|end_of_text|> as another stop token
* Reverted last change of adding the end_of_text stop word for llama 3
---------
Co-authored-by: Wouter Tichelaar <tichelaarw@spar.net>
Co-authored-by: Samuel Tallet <36248671+SamuelTallet@users.noreply.github.com>
Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-21 | gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761) | pmysl
2024-04-21 | doc : add link to falcon (#6789) | Jan Boon
2024-04-21 | readme : add Fedora instructions (#6783) | Mohammadreza Hendiani
* added Fedora to list of distros that may need the package (the packages have the same name on Fedora)
* how to add CLBlast that is available in the Fedora repos
2024-04-21 | llava : use logger in llava-cli (#6797) | Justine Tunney
This change removes printf() logging so llava-cli is shell scriptable.
2024-04-21 | llama : support Llama 3 HF conversion (#6745) | Pedro Cuenca
* Support Llama 3 conversion
  The tokenizer is BPE.
* style
* Accept suggestion
  Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
* llama : add llama_token_is_eog() ggml-ci
* llama : auto-detect more EOT tokens when missing in KV data
* convert : replacing EOS token is a hack
* llama : fix codegemma EOT token + add TODOs
* llama : fix model type string for 8B model
---------
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-20 | doc : server tests require llama to be built with curl enabled (#6788) | Jan Boon
2024-04-20 | common : try to fix Android CI (#6780) | Georgi Gerganov
* common : disable get_math_cpu_count() until Android CI gets fixed * common : another try
2024-04-19 | ci: add ubuntu latest release and fix missing build number (mac & ubuntu) (#6748) | loonerin
2024-04-19 | server: static: upstream upgrade (#6765) | Pierrick Hymbert
2024-04-19 | Implement the OLMo architecture (#6741) | nopperl
* implement olmo architecture
* remove unused variable
* remove unused moe branch
* remove check for weight
* remove superfluous moe, bias and rope tensors
* clarified comment
* fix clamp_kqv setting
* remove obsolete parameter name filter
2024-04-19 | train : add general name (#6752) | Austin
* llama : make general.name optional
* train: Add 'general.name' to model metadata
  Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>
---------
Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-19 | fix wrong parameter in cmd in readme-sycl.md (#6755) | Neo Zhang
Co-authored-by: jianyuzh <jianyu.zhang@intel.com>
2024-04-18 | ggml : group all experts in a single ggml_mul_mat_id (#6505) | slaren
* ggml : group all experts in a single ggml_mul_mat_id
  cuda : improve mmid row copy
* cuda : fix bin bcast with non-cont src0
* test-backend-ops : only run all mul mat tests for base types
* llama : disable moe offloading with SYCL
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-18 | convert : support models with multiple chat templates (#6588) | Sigbjørn Skjæret
* Support converting models with multiple chat templates
  Adds the following metadata:
  * tokenizer.chat_templates
  * tokenizer.chat_template.<name1>
  * tokenizer.chat_template.<name2>
  * tokenizer.chat_template.<...>
  Where `tokenizer.chat_templates` is an array of the template names (except `default`); `default` is added to the regular `tokenizer.chat_template`.
* replace filtered characters with underscore
* New script to add/modify/remove metadata
  This script creates a copy of a GGUF file and allows you to add/modify/remove metadata in the process. Most importantly this allows you to update chat templates, either as a string or directly from an updated tokenizer_config.json file.
* Add files via upload
  add new script to project/readme
* flake--
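To illustrate how a consumer might read this key layout, here is a minimal sketch using ggml's gguf C API (gguf_init_from_file, gguf_find_key, gguf_get_arr_str, gguf_get_val_str); the exact signatures and header location are assumptions based on the API at the time, and the model path is hypothetical:

```c
// Sketch: enumerate named chat templates from a GGUF file via the gguf API.
#include <stdbool.h>
#include <stdio.h>
#include "ggml.h"   // gguf API lived in ggml.h at this point; newer trees may use gguf.h

int main(void) {
    struct gguf_init_params params = { /* no_alloc = */ true, /* ctx = */ NULL };
    struct gguf_context * ctx = gguf_init_from_file("model.gguf", params); // hypothetical path
    if (!ctx) { return 1; }

    int kid = gguf_find_key(ctx, "tokenizer.chat_template");
    if (kid >= 0) {
        printf("default template:\n%s\n", gguf_get_val_str(ctx, kid));
    }

    kid = gguf_find_key(ctx, "tokenizer.chat_templates");
    if (kid >= 0) {
        const int n = gguf_get_arr_n(ctx, kid);
        for (int i = 0; i < n; ++i) {
            const char * name = gguf_get_arr_str(ctx, kid, i);
            char key[256];
            snprintf(key, sizeof(key), "tokenizer.chat_template.%s", name);
            int tid = gguf_find_key(ctx, key);
            if (tid >= 0) {
                printf("template '%s':\n%s\n", name, gguf_get_val_str(ctx, tid));
            }
        }
    }

    gguf_free(ctx);
    return 0;
}
```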
2024-04-18 | Qwen2 : assume tied weights if lm_head/output weights is missing (#6738) | Ren Xuancheng
2024-04-18 | llama : fix compatibility with old 2 expert models (#6735) | slaren
2024-04-17 | llamafile : tmp disable + build sgemm.o when needed (#6716) | Georgi Gerganov
* build : sgemm.o only when needed ggml-ci * llamafile : tmp disable due to MoE bug ggml-ci
2024-04-17 | readme : add UI (#6724) | Yaroslav
* Update README.md
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-16 | convert : fix autoawq gemma (#6704) | Zheng.Deng
* fix autoawq quantized gemma model convert error
  Using autoawq to quantize a gemma model includes an lm_head.weight tensor in model-00001-of-00002.safetensors. As a result, convert-hf-to-gguf.py can't map lm_head.weight; skipping this tensor prevents the error.
* change code to full string match and print necessary message
  Change the code to a full string match and print a short message to inform users that lm_head.weight has been skipped.
---------
Co-authored-by: Zheng.Deng <32841220+CUGfred@users.noreply.github.com>
2024-04-16 | llama : make general.name optional (#6709) | Georgi Gerganov
2024-04-16 | ggml : fix llamafile sgemm wdata offsets (#6710) | Georgi Gerganov
ggml-ci
2024-04-16 | ggml : add llamafile sgemm (#6414) | Justine Tunney
This change upstreams llamafile's cpu matrix multiplication kernels which improve image and prompt evaluation speed. For starters, Q4_0 and Q8_0 weights should go ~40% faster on CPU. The biggest benefits are with data types like f16 / f32, which process prompts 2x faster thus making them faster than quantized data types for prompt evals.

This change also introduces bona fide AVX512 support since tinyBLAS is able to exploit the larger register file. For example, on my CPU llama.cpp llava-cli processes an image prompt at 305 tokens/second, using the Q4_K and Q4_0 types, which has always been faster than if we used f16 LLaVA weights, which at HEAD go 188 tokens/second. With this change, f16 LLaVA performance leapfrogs to 464 tokens/second.

On Intel Core i9-14900K this change improves F16 prompt perf by 5x. For example, using llama.cpp at HEAD with Mistral 7b f16 to process a 215 token prompt will go 13 tok/sec. This change has fixes making it go 52 tok/sec. It's mostly thanks to my vectorized outer product kernels but also because I added support for correctly counting the number of cores on Alderlake, so the default thread count discounts Intel's new efficiency cores. Only Linux right now can count cores.

This work was sponsored by Mozilla who's given permission to change the license of this code from Apache 2.0 to MIT. To read more about what's improved, and how it works, see: https://justine.lol/matmul/
2024-04-16 | llama : add StableLM2 12B (#6635) | Ashish
* StableLM2 12B support for huggingface -> GGUF
* StableLM12 tensormapping and constants
* StableLM-2-12b model support
* fix
* Added 12B support
* Removed autoformatting; resolved bug where model_arch was not selecting StableLM2
* Formatting
* Do QK norm stacking in model conversion step
* Converge StableLM and StableLM2 code to simplify graph construction
* Fix accidental removal
* Removed warnings
* Revert formatter
* Move QK norm stack to private function so it's easier to read
* refactor stablelm graph builder to support 1.6, 3b and 12b more efficiently
* Proper check for None type for new_name to avoid crash; formatting; revert change to base class `write_tensors()`
* Format
* Formatting
* format
  Co-authored-by: compilade <git@compilade.net>
* Fix incorrect check for K norm
* space after commas; Keep indentation multiple of 4 spaces
* Flake8 format
* Removed unnecessary conditional branches
* Removed unused comment
* Fixed incorrect tensor passing
* Format
---------
Co-authored-by: compilade <git@compilade.net>
2024-04-16 | llama : add qwen2moe (#6074) | Shijie
* support qwen2moe
* fix-review
* metal : support unary ops for nelements % 4 != 0
* metal : require contiguousness for float4 unary kernels
* metal : require contiguousness for float4 unary kernels (cont)
* fix-review
* names : for brevity "SHARED_EXP" -> "SHEXP"
* llama : reuse build_moe_ffn()
* llama : add model type name
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-16 | gritlm : add --outdir option to hf.sh script (#6699) | Daniel Bevenius
This commit updates the hf.sh script usage to include the --outdir option and specifies the models directory as the output directory. The motivation for this is to avoid cluttering the root directory with model files. Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-16 | perplexity : require positive --ctx-size arg (#6695) | Georgi Gerganov
2024-04-16 | gguf : add special tokens metadata for FIM/Infill (#6689) | Daniel Bevenius
This commit adds special token metadata for Fill-In-the-Middle (FIM)/Infill to the GGUF model. The motivation for this is that currently there is support for CodeLlama but other models exist now like CodeGemma, but the different models use different token ids for the special tokens and this commit allows for supporting multiple models. Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-15 | `main`: add --json-schema / -j flag (#6659) | Olivier Chafik
* main: add --json-schema / -j * json: move json-schema-to-grammar to common lib * json: fix zig build
2024-04-15 | llama : fix restoring the number of outputs from state files (#6687) | compilade