2024-02-23  server : add KV cache quantization options (#5684)  [AlpinDale]
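These options select the ggml types used for the K and V caches. A minimal sketch of the underlying API, assuming only the public `type_k`/`type_v` fields of `llama_context_params`; the Q8_0 choice and the helper name are illustrative, not the server's actual code:

```cpp
// Sketch: create a context whose KV cache is stored quantized.
// GGML_TYPE_Q8_0 here is only an example type; the default is F16.
#include "llama.h"

llama_context * make_ctx_with_quantized_kv(llama_model * model) {
    llama_context_params cparams = llama_context_default_params();
    cparams.type_k = GGML_TYPE_Q8_0; // data type for the K cache
    cparams.type_v = GGML_TYPE_Q8_0; // data type for the V cache
    return llama_new_context_with_model(model, cparams);
}
```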
2024-02-23  convert : fix missing ftype for gemma (#5690)  [Jared Van Bortel]
2024-02-22  mpt : do not duplicate token_embd.weight on disk (#5670)  [Jared Van Bortel]
2024-02-22  gemma : use more bits for the token_embd.weight tensor (#5650)  [Georgi Gerganov]
    * gemma : use Q8_0 for the token_embd.weight tensor
    * llama : quantize token_embd.weight using output type
2024-02-22  py : add Gemma conversion from HF models (#5647)  [Georgi Gerganov]
    * py : add gemma conversion from HF models
    * Update convert-hf-to-gguf.py
      Co-authored-by: Aarni Koskela <akx@iki.fi>
    * Update convert-hf-to-gguf.py
      Co-authored-by: Aarni Koskela <akx@iki.fi>
    * Update convert-hf-to-gguf.py
      Co-authored-by: Jared Van Bortel <jared@nomic.ai>
    ---------
    Co-authored-by: Aarni Koskela <akx@iki.fi>
    Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-02-22  ggml : always define ggml_fp16_t as uint16_t (#5666)  [Georgi Gerganov]
    * ggml : always define ggml_fp16_t as uint16_t ggml-ci
    * ggml : cont ggml-ci
    * ggml : cont
    * ggml : cont ggml-ci
    * ggml : cont ggml-ci
    * cuda : no longer ggml headers last ggml-ci
    * ggml : fix q6_K FP16 -> FP32 conversion ggml-ci
    * ggml : more FP16 -> FP32 conversion fixes ggml-ci
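With `ggml_fp16_t` reduced to a plain `uint16_t` bit pattern, any place that reads FP16 data must convert explicitly before doing float arithmetic, which is what the conversion fixes above address. As a generic illustration (not ggml's own implementation), an IEEE-754 half-to-float conversion looks like this:

```cpp
// Generic software FP16 -> FP32 conversion of a raw uint16_t bit pattern.
#include <cstdint>
#include <cstring>
#include <cmath>

static float fp16_bits_to_fp32(uint16_t h) {
    const uint32_t sign = (uint32_t)(h >> 15) << 31;
    const uint32_t exp  = (h >> 10) & 0x1f;   // 5-bit exponent, bias 15
    const uint32_t mant =  h        & 0x3ff;  // 10-bit mantissa

    float value;
    if (exp == 0) {
        value = std::ldexp((float) mant, -24);                      // zero / subnormal
    } else if (exp == 31) {
        value = mant ? NAN : INFINITY;                              // NaN / infinity
    } else {
        value = std::ldexp((float)(mant | 0x400), (int) exp - 25);  // normal
    }

    uint32_t bits;
    std::memcpy(&bits, &value, sizeof(bits));
    bits |= sign;                                                   // re-apply the sign bit
    std::memcpy(&value, &bits, sizeof(value));
    return value;
}
```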
2024-02-22  sync : ggml  [Georgi Gerganov]
2024-02-22  ggml : 32-bit arm compat (whisper/1891)  [Georgi Gerganov]
    * ggml : 32-bit arm compat
    * ggml : add ggml_vqtbl1q_s8 impl
    * ggml : cont
2024-02-22  nix: init singularity and docker images (#5056)  [Someone]
    Exposes a few attributes demonstrating how to build [singularity](https://docs.sylabs.io/guides/latest/user-guide/)/[apptainer](https://apptainer.org/) and Docker images re-using llama.cpp's Nix expression.
    Built locally on `x86_64-linux` with `nix build github:someoneserge/llama.cpp/feat/nix/images#llamaPackages.{docker,docker-min,sif,llama-cpp}` and it's fast and effective.
2024-02-22  py : minor fixes (#5668)  [Georgi Gerganov]
2024-02-22  Add Gemma chat template (#5665)  [Xuan Son Nguyen]
    * add gemma chat template
    * gemma: only apply system_prompt on non-model message
2024-02-22  workflows: nix: hardcode cachix ids, build unconditionally (#5663)  [Someone]
    GitHub does not expose environment and repository variables to PRs coming from forks, which means we have effectively been disabling the Nix CI actions for most PRs.
    The `if:` also didn't make much sense, because we can always pull from cachix, and there's no point (albeit no risk either) in pushing cache for untrusted code.
2024-02-22  minor : fix trailing whitespace (#5638)  [Georgi Gerganov]
2024-02-22  readme : update hot topics  [Georgi Gerganov]
2024-02-22  server : fallback to chatml, add AlphaMonarch chat template (#5628)  [Xuan Son Nguyen]
    * server: fallback to chatml
    * add new chat template
    * server: add AlphaMonarch to test chat template
    * server: only check model template if there is no custom tmpl
    * remove TODO
2024-02-22  server : clarify some params in the docs (#5640)  [Alexey Parfenov]
2024-02-22  mpt : add optional bias tensors (#5638)  [Dat Quoc Nguyen]
    Update for MPT with optional bias parameters: to work with PhoGPT and SEA-LION models that were pre-trained with 'bias'.
2024-02-22  llama : fix loading models with shared tok_embd and output (#5651)  [slaren]
    ggml-ci
2024-02-22  Add docs for llama_chat_apply_template (#5645)  [Xuan Son Nguyen]
    * add docs for llama_chat_apply_template
    * fix typo
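The documented function takes the chat as an array of role/content pairs and writes the formatted prompt into a caller-provided buffer. A hedged usage sketch (the retry-on-overflow pattern and the message contents are illustrative, not taken from the new docs):

```cpp
// Sketch: format a chat with the model's built-in template (tmpl == nullptr).
#include "llama.h"
#include <string>
#include <vector>

std::string format_chat(const llama_model * model, const char * tmpl = nullptr) {
    std::vector<llama_chat_message> msgs = {
        { "system", "You are a helpful assistant." },
        { "user",   "Hello!"                       },
    };

    std::string buf(1024, '\0');
    int32_t n = llama_chat_apply_template(model, tmpl, msgs.data(), msgs.size(),
                                          /*add_ass=*/true, &buf[0], (int32_t) buf.size());
    if (n > (int32_t) buf.size()) {
        // The return value is the required size: grow the buffer and retry.
        buf.resize(n);
        n = llama_chat_apply_template(model, tmpl, msgs.data(), msgs.size(),
                                      true, &buf[0], (int32_t) buf.size());
    }
    buf.resize(n > 0 ? n : 0);
    return buf;
}
```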
2024-02-21  llama : fix session save/load with quantized KV (#5649)  [slaren]
2024-02-21  gemma : allow offloading the output tensor (#5646)  [slaren]
2024-02-21  examples : do not assume BOS when shifting context (#5622)  [Jared Van Bortel]
2024-02-21  sync : ggml  [Georgi Gerganov]
2024-02-21  server: health: fix race condition on slots data using tasks queue (#5634)  [Pierrick Hymbert]
    * server: health: fix race condition on slots data using tasks queue
    * server: health:
      * include_slots only if slots_endpoint
      * fix compile warning task.target_id not initialized.
2024-02-21  readme : add LocalAI to the availables UI (#5629)  [Ettore Di Giacinto]
2024-02-21  sync : ggml (#5633)  [Georgi Gerganov]
    * ggml : fix conv_2d batch mode (ggml/737)
      Co-authored-by: bssrdf <bssrdf@gmail.com>
    * ggml : compute forward no longer pass src tensors (ggml/729)
    * sync : ggml ggml-ci
    ---------
    Co-authored-by: bssrdf <merlintiger@hotmail.com>
    Co-authored-by: bssrdf <bssrdf@gmail.com>
2024-02-21  readme : update hot topics  [Georgi Gerganov]
2024-02-21  llava : add --skip-unknown to 1.6 convert.py (#5632)  [Daniel Bevenius]
    This commit adds the `--skip-unknown` option to the convert.py script and removes the saving of the updated checkpoints to avoid updating possibly checked out files.
    The motivation for this change is that this was done for 1.5 in Commit fc0c8d286a533363a9a663510b62af85ffad58b3 ("llava : update surgery script to not remove tensors") and makes the examples more consistent.
    Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-02-21  llama : add `gemma` model (#5631)  [postmasters]
    There are a couple of things to note about this architecture:
    1. Shared input and output embedding parameters.
    2. Key length and value length are not derived from `n_embd`.
    More information about the models can be found at https://ai.google.dev/gemma. GGUFs can be downloaded from https://huggingface.co/google.
2024-02-21  [SYCL] conext add name (#5624)  [Meng, Hengyu]
    * [SYCL] conext add name
    * name should start with SYCL*
2024-02-21  IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)  [Kawrakow]
    * iq4_nl: squash commits for easier rebase
    * Basics (quantize, dequantize)
    * CUDA dequantize and dot product
    * Slightly faster CUDA dot product (120 t/s)
    * Switch to 6-bit scales
    * Scalar dot product
    * AVX2 dot product
    * ARM_NEON dot product
    * Works on metal, but still slow
    * Slightly better Metal dot product
    * Another small Metal improvement
    * Metal dot product is getting there
    * Faster CUDA dot product
    * Add 1/8 ffn_down layers as Q5_K when no imatrix has been provided
    * Report the actual bpw
    * Add _xs mix that is 4.05 bpw for non-MoE models
    * Remove IQ4_XS for now, slightly adjust kvalues_iq4nl
    * AVX2 dot product uses Q8_0 instead of Q8_K
    * Add to test-backend-ops
    * Minor fix
    * Also use Q5_K for attn_output in MoE models
    * Fixes after merging latest master
    * Switching to blocks of 32
    * AVX2 for blocks of 32
    * Scalar dot product for blocks of 32
    * ARM_NEON dot product for blocks of 32
    * Metal kernels for blocks of 32
    * Slightly faster Metal kernels
    * iq4_nl: Fix after merging with master
    * iq4_nl: another fix after merging with master
    * Use IQ4_NL instead of Q4_K when using k-quants is not possible
    * Fix typo that makes several tests fail
    * It was the ggml_vdotq thing missed inside the brackets
    ---------
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
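For orientation, a rough sketch of the format the title describes: each block covers 32 weights with one fp16 scale and 32 packed 4-bit indices into a small table of non-uniformly spaced levels. The level values and the nibble packing order below are assumptions for illustration, not the actual kvalues_iq4nl data:

```cpp
// Illustrative IQ4_NL-style block layout and dequantization (not ggml's code).
#include <cstdint>

constexpr int QK4_NL = 32;                 // weights per block

struct block_iq4_nl_sketch {
    uint16_t d;                            // per-block scale as raw fp16 bits
    uint8_t  qs[QK4_NL / 2];               // 32 x 4-bit indices, two per byte
};

// 16 non-linear quantization levels (example values only)
constexpr int8_t levels[16] = {
    -127, -104, -83, -65, -49, -35, -22, -10,
       1,   13,  25,  38,  53,  69,  89, 113,
};

// d is the block scale already converted from fp16 bits to float by the caller.
void dequantize_block(const block_iq4_nl_sketch & b, float d, float * y) {
    for (int j = 0; j < QK4_NL / 2; ++j) {
        y[j]              = d * levels[b.qs[j] & 0x0f];  // low nibble
        y[j + QK4_NL / 2] = d * levels[b.qs[j] >> 4];    // high nibble
    }
}
```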
2024-02-20  server : support llava 1.6 (#5553)  [CJ Pais]
    * server: init working 1.6
    * move clip_image to header
    * remove commented code
    * remove c++ style from header
    * remove todo
    * expose llava_image_embed_make_with_clip_img
    * fix zig build
2024-02-20  make : fix debug build with CUDA (#5616)  [slaren]
2024-02-20  llava : add explicit instructions for llava-1.6 (#5611)  [Daniel Bevenius]
    This commit contains a suggestion for the README.md in the llava example. The suggestion adds explicit instructions for how to convert a llava-1.6 model and run it using llava-cli.
    The motivation for this is that having explicit instructions similar to the 1.5 instructions will make it easier for users to try this out.
    Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-02-20  Server: use llama_chat_apply_template (#5593)  [Xuan Son Nguyen]
    * server: use llama_chat_apply_template
    * server: remove trailing space
    * server: fix format_chat
    * server: fix help message
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    * server: fix formatted_chat
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-20  readme : update UI list (#5605)  [Dane Madsen]
    * Add maid to ui list
    * Specify licence
2024-02-20  metal : add build system support for embedded metal library (#5604)  [Haoxiang Fei]
    * add build support for embedded metal library
    * Update Makefile
    ---------
    Co-authored-by: Haoxiang Fei <feihaoxiang@idea.edu.cn>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-20  server : health endpoint configurable failure on no slot (#5594)  [Pierrick Hymbert]
2024-02-20  Update ggml_sycl_op_mul_mat_vec_q (#5502)  [AidanBeltonS]
    * Update ggml_sycl_op_mul_mat_vec_q
    * Apply suggestions from code review
      Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
    * revert suggestion on macro
    * fix bug
    * Add quant type GGML_TYPE_IQ1_S to unsupported
    * fix format
    ---------
    Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-02-19  nix: now that we can do so, allow MacOS to build Vulkan binaries  [Mathijs de Bruin]
    Author: Philip Taron <philip.taron@gmail.com>
    Date: Tue Feb 13 20:28:02 2024 +0000
2024-02-19  Enable Vulkan MacOS CI  [0cc4m]
2024-02-19  Refactor validation and enumeration platform checks into functions to clean up ggml_vk_instance_init()  [0cc4m]
2024-02-19  Add check for VK_KHR_portability_enumeration for MoltenVK support  [0cc4m]
2024-02-19  Add preprocessor checks for Apple devices.  [Mathijs de Bruin]
    Based on work by @rbourgeat in https://github.com/ggerganov/llama.cpp/pull/5322/files
2024-02-19  Resolve ErrorIncompatibleDriver with Vulkan on MacOS.  [Mathijs de Bruin]
    Refs:
    - https://chat.openai.com/share/7020ce72-65fc-45ec-b7be-9d9d798a5f3f
    - https://github.com/SaschaWillems/Vulkan/issues/954
    - https://github.com/haasn/libplacebo/issues/128
    - https://github.com/KhronosGroup/Vulkan-Samples/issues/476
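These MoltenVK-related entries follow the standard Vulkan portability pattern: enable VK_KHR_portability_enumeration and set the portability-enumeration flag on the instance, otherwise MoltenVK's non-conformant implementation stays hidden and instance creation fails with VK_ERROR_INCOMPATIBLE_DRIVER. A minimal sketch of that pattern, not the actual ggml_vk_instance_init() code (which also guards this behind runtime and preprocessor checks):

```cpp
// Sketch: create a Vulkan instance that can enumerate MoltenVK on macOS.
#include <vulkan/vulkan.h>

VkInstance create_instance_with_portability() {
    const char * exts[] = { VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME };

    VkInstanceCreateInfo ci = {};
    ci.sType                   = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.flags                   = VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR;
    ci.enabledExtensionCount   = 1;
    ci.ppEnabledExtensionNames = exts;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&ci, nullptr, &instance);  // check the VkResult in real code
    return instance;
}
```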
2024-02-19  Allow for Vulkan build with Accelerate.  [Mathijs de Bruin]
    Closes #5304
2024-02-19  cuda : ignore peer access already enabled errors (#5597)  [slaren]
    * cuda : ignore peer access already enabled errors
    * fix hip
2024-02-19  make : pass CPPFLAGS directly to nvcc, not via -Xcompiler (#5598)  [Jared Van Bortel]
2024-02-19  examples : support minItems/maxItems in JSON grammar converter (#5039)  [nopperl]
    * support minLength and maxLength in JSON schema grammar converter
    * Update examples/json-schema-to-grammar.py
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-19  llava : remove extra cont (#5587)  [Georgi Gerganov]