2024-02-27  Makefile: use variables for cublas (#5689)  (le.chang)
* make: use arch variable for cublas * fix UNAME_M * check opt first --------- Co-authored-by: lindeer <le.chang118@gmail.com>
2024-02-26  fix server hangs on empty prompt (#5733)  (Xuan Son Nguyen)
2024-02-26  Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721)  (Kawrakow)
* Adding IQ2_S and IQ2_M as a single cumulative commit
* Update examples/quantize/quantize.cpp
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-26  CUDA: fix DEBUG_CUDA_MALLOC (#5729)  (Johannes Gäßler)
2024-02-26  readme : update ui list (#5731)  (Artem)
* Add LLMFarm (ui for iOS) to list
2024-02-26  [SYCL] Add support for soft_max ALiBi (#5639)  (AidanBeltonS)
* Add support for bias * Update pre-processor * rm commented code * fix format * fix CI --------- Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-02-26  unicode : reuse iterator (#5726)  (Georgi Gerganov)
2024-02-26  server: CI fix trailing space (#5728)  (Pierrick Hymbert)
2024-02-26  server: CI tests reduce build matrix (#5725)  (Pierrick Hymbert)
2024-02-26  llama : fix Gemma rope type (#5691)  (Georgi Gerganov)
2024-02-25  flake.lock: Update  (github-actions[bot])
Flake lock file updates: • Updated input 'nixpkgs': 'github:NixOS/nixpkgs/5863c27340ba4de8f83e7e3c023b9599c3cb3c80' (2024-02-16) → 'github:NixOS/nixpkgs/cbc4211f0afffe6dfd2478a62615dd5175a13f9a' (2024-02-23)
2024-02-25  server: tests - slow inference causes timeout on the CI (#5715)  (Pierrick Hymbert)
* server: tests - longer inference timeout for CI
2024-02-25  server: docs - refresh and tease a little bit more the http server (#5718)  (Pierrick Hymbert)
* server: docs - refresh and tease a little bit more the http server
* Rephrase README.md server doc
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update examples/server/README.md
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update examples/server/README.md
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update README.md
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-25  llama : refactor k-shift implementation + KV defragmentation (#5691)  (Georgi Gerganov)
* llama : refactor k-shift implementation ggml-ci
* llama : rename llama_kv_cache_seq_shift to llama_kv_cache_seq_add
* llama : cont k-shift refactoring + normalize type names ggml-ci
* minor : fix MPI builds
* llama : reuse n_rot from the build context ggml-ci
* llama : revert enum name changes from this PR ggml-ci
* llama : update llama_rope_type
* llama : add comment about rope values
* llama : fix build
* passkey : apply kv cache updates explicitly ggml-ci
* llama : change name to llama_kv_cache_update()
* llama : add llama_kv_cache_seq_pos_max()
* passkey : fix llama_kv_cache_seq_pos_max() usage
* llama : some llama_kv_cell simplifications
* llama : add llama_kv_cache_compress (EXPERIMENTAL)
* llama : add alternative KV cache merging (EXPERIMENTAL)
* llama : add llama_kv_cache_defrag
* llama : comments
* llama : remove llama_kv_cache_compress, will add in a separate PR ggml-ci
* llama : defragment via non-overlapping moves
* llama : ggml_graph based defrag implementation ggml-ci
* llama : switch the loop order in build_defrag
* llama : add comments
2024-02-25  server : fix crash when system prompt is bigger than batch size (#5714)  (compilade)
The system prompt is now decoded in batches.
* server : fix off-by-one n_past when start of prompt matches whole cache
  The tokens right after the matching part would otherwise skip a pos value.
2024-02-25  ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compatibility (#5711)  (Radosław Gryta)
* [ggml-quants] Provide ggml_vqtbl1q_u8 for 64bit compatibility
  vqtbl1q_u8 is not part of arm v7 neon library
* [android-example] Remove abi filter after arm v7a fix
* [github-workflows] Do not skip Android armeabi-v7a build
2024-02-25  make : fix nvcc version is empty (#5713)  (kwin1412)
Fix the case where the detected nvcc version is empty.
2024-02-25  readme : add Msty to UI list (#5618)  (Ashok Gelal)
2024-02-25  server: logs - unified format and --log-format option (#5700)  (Pierrick Hymbert)
* server: logs - always use JSON logger, add thread_id in message, log task_id and slot_id
* server : skip GH copilot requests from logging
* server : change message format of server_log()
* server : no need to repeat log in comment
* server : log style consistency
* server : fix compile warning
* server : fix tests regex patterns on M2 Ultra
* server: logs: PR feedback on log level
* server: logs: allow to choose log format in json or plain text
* server: tests: output server logs in text
* server: logs switch init logs to server logs macro
* server: logs ensure json value does not raise an error
* server: logs reduce level VERBOSE to VERB to max 4 chars
* server: logs lower case as other log messages
* server: logs avoid static in general
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* server: logs PR feedback: change text log format to: LEVEL [function_name] message | additional=data
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
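As a usage sketch of the new option (the binary name and model path below are placeholders, not part of the commit; the flag values come from the change itself):
    # Sketch only: ./server and the model path are assumed.
    $ ./server -m models/model.gguf --log-format json   # structured JSON logs (default)
    $ ./server -m models/model.gguf --log-format text   # plain text: LEVEL [function_name] message | additional=data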
2024-02-25  server: concurrency fix + monitoring - add /metrics prometheus compatible endpoint (#5708)  (Pierrick Hymbert)
* server: monitoring - add /metrics prometheus compatible endpoint
* server: concurrency issue, when 2 tasks are waiting for results, only one call thread is notified
* server: metrics - move to a dedicated struct
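A minimal way to try the new endpoint; the --metrics enabling flag and the port are assumptions, the commit itself only guarantees the /metrics path:
    # Sketch only: start the server with metrics enabled, then scrape the Prometheus-compatible endpoint.
    $ ./server -m models/model.gguf --metrics &
    $ curl http://localhost:8080/metrics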
2024-02-25  cmake : fix compilation for Android armeabi-v7a (#5702)  (Radosław Gryta)
2024-02-25  code : normalize enum names (#5697)  (Georgi Gerganov)
* code : normalize enum names ggml-ci * code : cont * code : cont
2024-02-25  py : fix StableLM conversion after config.json changes (#5703)  (Anas Ahouzi)
* Fix issues during StableLM models conversion
* Fix hard coded layer_norm_eps
* Support layer_norm_eps for LlavaStableLM
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* Add missing parenthesis
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* Support rotary_factor for LlavaStableLM
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* fix typo
* Add StableLMEpochForCausalLM for safety
  Co-authored-by: compilade <113953597+compilade@users.noreply.github.com>
* Add StableLMEpochForCausalLM for safety 2
  Co-authored-by: compilade <113953597+compilade@users.noreply.github.com>
---------
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: compilade <113953597+compilade@users.noreply.github.com>
2024-02-24  server: continue to update other slots on embedding concurrent request (#5699)  (Pierrick Hymbert)
* server: #5655 - continue to update other slots on embedding concurrent request.
* server: tests: add multi users embeddings as fixed
* server: tests: adding OAI compatible embedding concurrent endpoint
* server: tests: adding OAI compatible embedding with multiple inputs
2024-02-24  IQ3_S: a much better alternative to Q3_K (#5676)  (Kawrakow)
* iq4_nl: squash commits for easier rebase
* Basics (quantize, dequantize)
* CUDA dequantize and dot product
* Slightly faster CUDA dot product (120 t/s)
* Switch to 6-bit scales
* Scalar dot product
* AVX2 dot product
* ARM_NEON dot product
* Works on metal, but still slow
* Slightly better Metal dot product
* Another small Metal improvement
* Metal dot product is getting there
* Faster CUDA dot product
* Add 1/8 ffn_down layers as Q5_K when no imatrix has been provided
* Report the actual bpw
* Add _xs mix that is 4.05 bpw for non-MoE models
* Remove IQ4_XS for now, slightly adjust kvalues_iq4nl
* AVX2 dot product uses Q8_0 instead of Q8_K
* Add to test-backend-ops
* Minor fix
* Also use Q5_K for attn_output in MoE models
* Fixes after merging latest master
* Switching to blocks of 32
* AVX2 for blocks of 32
* Scalar dot product for blocks of 32
* ARM_NEON dot product for blocks of 32
* Metal kernels for blocks of 32
* Slightly faster Metal kernels
* Resurrecting iq3_xs
  After all the experimentation, nothing was better than this.
* Minor PPL improvement via a block scale fudge factor
* Minor improvement via 3 neighbours
* iq3_xs: working scalar and AVX2 dot products
* iq3_xs: ARM_NEON dot product - works but extremely slow (10 t/s)
* iq3_xs: working Metal implementation
* Adding IQ3_M - IQ3_XS mix with mostly Q4_K
* iq3_xs: a 3.4375 bpw variant
* iq3_xs: make CUDA work for new version
* iq3_xs: make scalar and AVX2 work for new version
* iq3_s: make ARM_NEON work with new version
* iq3_xs: make new version work on metal
  Performance is very similar to Q3_K_S
* iq3_xs: tiny Metal speed improvement
* iq3_xs: tiny Metal speed improvement
* Fix stupid warning
* Q3_K_XS now uses a mix of IQ3_XS and IQ3_XXS
* iq3_xs: rename to iq3_s
* iq3_s: make tests pass
* Move Q3_K_XS mix to 3.25 bpw
* Attempt to fix failing tests
* Another attempt to fix the Windows builds
* Attempt to fix ROCm
* ROCm again
* iq3_s: partial fix for QK_K = 64
* iq3_s: make it work on metal for QK_K = 64
  Pleasant surprise: the coding was super-block size independent, so all it took was to delete some QK_K == 256 guards.
* Will this fix ROCm?
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
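For reference, a hedged sketch of producing the new quantization with the existing quantize tool (file names are placeholders; supplying an imatrix is optional but helps at these bit rates, as the notes above suggest):
    # Sketch only: quantize an f16 GGUF to the new IQ3_S type.
    $ ./quantize --imatrix imatrix.dat model-f16.gguf model-iq3_s.gguf IQ3_S
    # Q3_K_XS now maps to a mix of IQ3_XS and IQ3_XXS, per the notes above:
    $ ./quantize model-f16.gguf model-q3_k_xs.gguf Q3_K_XS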
2024-02-24  server: init functional tests (#5566)  (Pierrick Hymbert)
* server: tests: init scenarios
  - health and slots endpoints
  - completion endpoint
  - OAI compatible chat completion requests w/ and without streaming
  - completion multi users scenario
  - multi users scenario on OAI compatible endpoint with streaming
  - multi users with total number of tokens to predict exceeds the KV Cache size
  - server wrong usage scenario, like in Infinite loop of "context shift" #3969
  - slots shifting
  - continuous batching
  - embeddings endpoint
  - multi users embedding endpoint: Segmentation fault #5655
  - OpenAI-compatible embeddings API
  - tokenize endpoint
  - CORS and api key scenario
* server: CI GitHub workflow
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
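A rough sketch of running scenario tests like these (the directory layout and the behave-based runner are assumptions, not stated in the entry above):
    # Sketch only: install the Python test dependencies, then run the feature scenarios.
    $ cd examples/server/tests
    $ pip install -r requirements.txt
    $ behave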
2024-02-23  server : add KV cache quantization options (#5684)  (AlpinDale)
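A hedged usage sketch; the cache-type flag spellings below are assumptions as far as this one-line entry goes:
    # Sketch only: store the KV cache as q8_0 instead of the default f16 to reduce memory use.
    $ ./server -m models/model.gguf -ctk q8_0 -ctv q8_0
    # A more aggressive 4-bit cache, smaller still at some quality cost:
    $ ./server -m models/model.gguf -ctk q4_0 -ctv q4_0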
2024-02-23  convert : fix missing ftype for gemma (#5690)  (Jared Van Bortel)
2024-02-22  mpt : do not duplicate token_embd.weight on disk (#5670)  (Jared Van Bortel)
2024-02-22  gemma : use more bits for the token_embd.weight tensor (#5650)  (Georgi Gerganov)
* gemma : use Q8_0 for the token_embd.weight tensor * llama : quantize token_embd.weight using output type
2024-02-22  py : add Gemma conversion from HF models (#5647)  (Georgi Gerganov)
* py : add gemma conversion from HF models
* Update convert-hf-to-gguf.py
  Co-authored-by: Aarni Koskela <akx@iki.fi>
* Update convert-hf-to-gguf.py
  Co-authored-by: Aarni Koskela <akx@iki.fi>
* Update convert-hf-to-gguf.py
  Co-authored-by: Jared Van Bortel <jared@nomic.ai>
---------
Co-authored-by: Aarni Koskela <akx@iki.fi>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
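A hedged sketch of the conversion flow this enables (the checkpoint path and output type are placeholders):
    # Sketch only: convert a Hugging Face Gemma checkpoint directory to GGUF.
    $ python convert-hf-to-gguf.py path/to/gemma-hf --outtype f16 --outfile gemma-f16.gguf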
2024-02-22  ggml : always define ggml_fp16_t as uint16_t (#5666)  (Georgi Gerganov)
* ggml : always define ggml_fp16_t as uint16_t ggml-ci
* ggml : cont ggml-ci
* ggml : cont
* ggml : cont ggml-ci
* ggml : cont ggml-ci
* cuda : no longer ggml headers last ggml-ci
* ggml : fix q6_K FP16 -> FP32 conversion ggml-ci
* ggml : more FP16 -> FP32 conversion fixes ggml-ci
2024-02-22  sync : ggml  (Georgi Gerganov)
2024-02-22  ggml : 32-bit arm compat (whisper/1891)  (Georgi Gerganov)
* ggml : 32-bit arm compat * ggml : add ggml_vqtbl1q_s8 impl * ggml : cont
2024-02-22  nix: init singularity and docker images (#5056)  (Someone)
Exposes a few attributes demonstrating how to build [singularity](https://docs.sylabs.io/guides/latest/user-guide/)/[apptainer](https://apptainer.org/) and Docker images re-using llama.cpp's Nix expression. Built locally on `x86_64-linux` with `nix build github:someoneserge/llama.cpp/feat/nix/images#llamaPackages.{docker,docker-min,sif,llama-cpp}` and it's fast and effective.
2024-02-22  py : minor fixes (#5668)  (Georgi Gerganov)
2024-02-22  Add Gemma chat template (#5665)  (Xuan Son Nguyen)
* add gemma chat template * gemma: only apply system_prompt on non-model message
2024-02-22  workflows: nix: hardcode cachix ids, build unconditionally (#5663)  (Someone)
The fact that GitHub does not expose environment and repository variables to PRs coming from forks means that we have effectively been disabling the Nix CI actions for most PRs. The `if:` also didn't make much sense, because we can always pull from cachix, and there's no point (albeit no risk either) in pushing a cache for untrusted code.
2024-02-22  minor : fix trailing whitespace (#5638)  (Georgi Gerganov)
2024-02-22  readme : update hot topics  (Georgi Gerganov)
2024-02-22  server : fallback to chatml, add AlphaMonarch chat template (#5628)  (Xuan Son Nguyen)
* server: fallback to chatml * add new chat template * server: add AlphaMonarch to test chat template * server: only check model template if there is no custom tmpl * remove TODO
2024-02-22  server : clarify some params in the docs (#5640)  (Alexey Parfenov)
2024-02-22  mpt : add optional bias tensors (#5638)  (Dat Quoc Nguyen)
Update MPT to support optional bias parameters, so it works with PhoGPT and SEA-LION models that were pre-trained with 'bias'.
2024-02-22  llama : fix loading models with shared tok_embd and output (#5651)  (slaren)
ggml-ci
2024-02-22  Add docs for llama_chat_apply_template (#5645)  (Xuan Son Nguyen)
* add docs for llama_chat_apply_template * fix typo
2024-02-21  llama : fix session save/load with quantized KV (#5649)  (slaren)
2024-02-21  gemma : allow offloading the output tensor (#5646)  (slaren)
2024-02-21  examples : do not assume BOS when shifting context (#5622)  (Jared Van Bortel)
2024-02-21  sync : ggml  (Georgi Gerganov)
2024-02-21  server: health: fix race condition on slots data using tasks queue (#5634)  (Pierrick Hymbert)
* server: health: fix race condition on slots data using tasks queue
* server: health:
  * include_slots only if slots_endpoint
  * fix compile warning task.target_id not initialized.