Age | Commit message | Author
2023-09-28 | ci : disable freeBSD builds due to lack of VMs (#3381) | Georgi Gerganov
2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | Georgi Gerganov
    * tests : verify that RoPE is "additive"
    * llama : replace ggml_diag_mask_inf with ggml_add (custom -inf mask)
    * ggml : ggml_rope now takes a vector with positions instead of n_past
    * metal : add rope_f16 kernel + optimize cpy kernels
    * llama : unified KV cache + batch inference API
    * llama : add new llama_decode() API that works with llama_batch
    * llama : add cell_max heuristic for more efficient kv_cache
    * llama : extend llama_kv_cache API
    * llama : more robust cell_max heuristic + wip shift
    * metal : disable concurrency optimization
    * llama : add llama_kv_cache_shift_seq + no more context swaps
    * llama : apply K-cache roping for Falcon and Baichuan
    * speculative : fix KV cache management
    * parallel : example for serving multiple users in parallel
    * parallel : disable hot-plug to avoid cache fragmentation
    * fixes : speculative KV cache + llama worst-case graph
    * llama : extend batch API to select which logits to output
    * llama : fix worst case graph build
    * ggml-cuda : update rope implementation for parallel decoding (#3254)
      * better solution for p0 computation
      * fix rope
      * simpler rope implementation
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    * make : add parallel to build + fix static functions in llama.cpp
    * simple : fix token counting
    * parallel : various improvements
    * llama : fix cell_max logic + rename functions
    * parallel : try smaller batches when the KV cache is fragmented
    * parallel : fix sequence termination criteria
    * llama : silence KV cache errors
    * parallel : remove new line from prompt
    * parallel : process system prompt once + configurable parameters + llama API
    * parallel : remove question with short answers
    * parallel : count cache misses
    * parallel : print misses on each request
    * parallel : minor
    * llama : fix n_kv to never become 0
    * parallel : rename hot-plug to continuous-batching
    * llama : improve llama_batch API + simplify parallel example
    * simple : add parallel decoding support
    * simple : improve comments + free batch
    * ggml-cuda : add rope f16, restore performance with parallel decoding (#3272)
      * offload KQ_mask with all models
      * fix rope shift
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    * llama : disable MPI for now
    * train : make KQ_pos memory buffer permanent via dummy scale op
    * ggml : revert change to ggml_cpy, add ggml_cont_Nd instead (#3275)
    * parallel : fix bug (extra BOS) + smaller token_prev array
    * parallel : fix cases where the input prompts can overflow the batch
    * parallel : add disabled experimental batch chunking in powers of two
    * llama : llama.h formatting + comments
    * simple : add README.md
    * llama : fix kv cache heuristic when context is less than 32
    * parallel : fix crash when `-n -1`
    * llama : simplify returns if/else branches
    * metal : use mm kernels for batch size > 2
    * examples : utilize new llama_get_logits_ith()
    * examples : add example for batched decoding
    * examples : do not eval prompt 2 times (close #3348)
    * server : clear the KV cache beyond n_past before llama_decode
    * server : avoid context swaps by shifting the KV cache
    Co-authored-by: slaren <slarengh@gmail.com>
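The core of this change is the new llama_batch / llama_decode() / llama_get_logits_ith() API mentioned above. The following is a rough sketch only, not code from the PR: the llama_batch field layout and the llama_batch_init() signature are assumptions based on the API as introduced here and have changed in later llama.cpp releases.

```cpp
// Hedged sketch of prompt evaluation with the post-#3228 batch API.
// Assumes an already loaded llama_model / llama_context and a tokenized prompt;
// the per-token pos/seq_id/logits fields follow the layout introduced by this PR.
#include "llama.h"
#include <vector>

static bool eval_prompt(llama_context * ctx, const std::vector<llama_token> & prompt) {
    llama_batch batch = llama_batch_init((int) prompt.size(), 0);

    batch.n_tokens = (int) prompt.size();
    for (int i = 0; i < batch.n_tokens; ++i) {
        batch.token[i]  = prompt[i];
        batch.pos[i]    = i;      // explicit positions instead of the old n_past
        batch.seq_id[i] = 0;      // all tokens belong to sequence 0
        batch.logits[i] = false;  // no logits needed for prompt tokens ...
    }
    batch.logits[batch.n_tokens - 1] = true;  // ... except the last one

    // llama_decode() replaces llama_eval() and fills the unified KV cache;
    // no context swap is involved.
    const bool ok = llama_decode(ctx, batch) == 0;

    if (ok) {
        // logits of the last prompt token, used to sample the first new token
        const float * logits = llama_get_logits_ith(ctx, batch.n_tokens - 1);
        (void) logits;
    }

    llama_batch_free(batch);
    return ok;
}
```

Multiple independent sequences can share one batch by giving their tokens different seq_id values, which is what the parallel example relies on.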
2023-09-28 | docs : mark code as Bash (#3375) | Kevin Ji
2023-09-28 | readme : add Mistral AI release 0.1 (#3362) | Pierre Alexandre SCHEMBRI
2023-09-28 | ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (#3370) | slaren
    * ggml-cuda : perform cublas fp16 matrix multiplication as fp16
    * try to fix rocm build
    * restrict fp16 mat mul to volta and up
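For context on what "fp16 matrix multiplication as fp16" means at the cuBLAS level, here is a generic, hedged sketch of an FP16-in/FP16-out GEMM via cublasGemmEx. It is not the ggml-cuda code from this commit; the handle and device buffers are placeholders, and it assumes CUDA 11+ for the CUBLAS_COMPUTE_16F compute type.

```cpp
// Generic sketch of an FP16 GEMM through cublasGemmEx (illustrative only).
// Assumes a valid cublasHandle_t and column-major device buffers
// d_A (m x k), d_B (k x n), d_C (m x n); FP16 accumulation is generally
// only worthwhile on Volta and newer GPUs, as the commit notes.
#include <cublas_v2.h>
#include <cuda_fp16.h>

cublasStatus_t gemm_f16(cublasHandle_t handle,
                        const __half * d_A, const __half * d_B, __half * d_C,
                        int m, int n, int k) {
    const __half alpha = __float2half(1.0f);
    const __half beta  = __float2half(0.0f);

    // Inputs, output and the accumulator are all FP16 (CUDA_R_16F / CUBLAS_COMPUTE_16F).
    return cublasGemmEx(handle,
                        CUBLAS_OP_N, CUBLAS_OP_N,
                        m, n, k,
                        &alpha,
                        d_A, CUDA_R_16F, m,
                        d_B, CUDA_R_16F, k,
                        &beta,
                        d_C, CUDA_R_16F, m,
                        CUBLAS_COMPUTE_16F,
                        CUBLAS_GEMM_DEFAULT);
}
```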
2023-09-27 | convert : remove bug in convert.py permute function (#3364) | Zhang Peiyuan
2023-09-27 | make-ggml.py : compatibility with more models and GGUF (#3290) | Richard Roberson
    * Resync my fork with new llama.cpp commits
    * examples : rename to use dash instead of underscore
    * New model conversions
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-27 | gguf : fix a few general keys (#3341) | Cebtenzzre
2023-09-27 | metal : reusing llama.cpp logging (#3152) | Rickard Hallerbäck
    * metal : reusing llama.cpp logging
    * cmake : build fix
    * metal : logging callback
    * metal : logging va_args memory fix
    * metal : minor cleanup
    * metal : setting function-like logging macro to capital letters
    * llama.cpp : trailing whitespace fix
    * ggml : log level enum used by llama
    * Makefile : cleanup ggml-metal recipe
    * ggml : ggml_log_callback typedef
    * ggml : minor
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
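A hedged sketch of what hooking this logging path looks like from an application, assuming the llama_log_set() entry point and the (level, text, user_data) callback shape referenced by the commit; exact enum values and header layout may differ between versions.

```cpp
// Sketch of installing a custom log handler via llama_log_set().
// Assumes the ggml_log_callback signature (level, text, user_data)
// introduced by this change.
#include "ggml.h"
#include "llama.h"
#include <cstdio>

static void my_log_callback(enum ggml_log_level level, const char * text, void * user_data) {
    (void) level;
    (void) user_data;
    // With this commit, ggml-metal messages go through the same callback
    // instead of being printed directly, so they can be filtered or redirected here.
    fputs(text, stderr);
}

int main() {
    llama_log_set(my_log_callback, /*user_data=*/nullptr);
    // ... proceed with backend init and model loading as usual.
    return 0;
}
```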
2023-09-27 | build : add ACCELERATE_NEW_LAPACK to fix warning on macOS Sonoma (#3342) | Jag Chadha
2023-09-27 | readme : add some recent perplexity and bpw measurements to READMES, link for k-quants (#3340) | BarfingLemurs
    * Update README.md
    * Update README.md
    * Update README.md with k-quants bpw measurements
2023-09-25 | cmake : fix build-info.h on MSVC (#3309) | DAN™
2023-09-25 | docs: Fix typo CLBlast_DIR var. (#3330) | 2f38b454
2023-09-25 | nix : add cuda, use a symlinked toolkit for cmake (#3202) | Erik Scholz
2023-09-23 | llama-bench : add README (#3317) | slaren
    * llama-bench : add README
    * minor edit
2023-09-23 | examples : fix RoPE defaults to match PR #3240 (#3315) | Cebtenzzre
2023-09-22 | scripts : use `/usr/bin/env` in shebang (#3313) | Kevin Ji
2023-09-21 | Update README.md (#3289) | Lee Drake
    * Update README.md
    * Update README.md
    Co-authored-by: slaren <slarengh@gmail.com>
2023-09-21 | ggml-opencl.cpp: Make private functions static (#3300) | shibe2
2023-09-21 | zig : fix for updated c lib (#3259) | Edward Taylor
2023-09-21 | embedding : update README.md (#3224) | yuiseki
2023-09-21 | CUDA: use only 1 thread if fully offloaded (#2915) | Johannes Gäßler
2023-09-20 | readme : update hot topics | Georgi Gerganov
2023-09-20 | llama : allow gguf RoPE keys to be overridden with defaults (#3240) | Cebtenzzre
2023-09-20 | benchmark-matmult : do not use integer abs() on a float (#3277) | Cebtenzzre
2023-09-20 | flake : Restore default package's buildInputs (#3262) | kang
2023-09-20 | CI: FreeBSD fix (#3258) | Alon
    * freebsd ci: use qemu
2023-09-20 | examples : fix benchmark-matmult (#1554) | Georgi Gerganov
    The precision for Q4_0 has degraded since #1508
2023-09-18 | make : restore build-info.h dependency for several targets (#3205) | Cebtenzzre
2023-09-18 | ci : switch cudatoolkit install on windows to networked (#3236) | Erik Scholz
2023-09-17 | CUDA: fix peer access logic (#3231) | Johannes Gäßler
2023-09-17 | CUDA: enable peer access between devices (#2470) | Johannes Gäßler
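For readers unfamiliar with the mechanism: peer access lets one GPU address another GPU's memory directly over NVLink/PCIe. The sketch below shows the generic CUDA runtime calls involved; it is illustrative only and is not the ggml-cuda logic from these two commits, which also decides when enabling peer access is actually beneficial.

```cpp
// Generic sketch of enabling CUDA peer-to-peer access between all device pairs.
#include <cuda_runtime.h>
#include <cstdio>

void enable_peer_access() {
    int n_devices = 0;
    cudaGetDeviceCount(&n_devices);

    for (int src = 0; src < n_devices; ++src) {
        cudaSetDevice(src);
        for (int dst = 0; dst < n_devices; ++dst) {
            if (src == dst) continue;

            int can_access = 0;
            cudaDeviceCanAccessPeer(&can_access, src, dst);
            if (can_access) {
                // The second argument is a reserved flags value and must be 0;
                // "already enabled" is harmless if a previous call succeeded.
                cudaError_t err = cudaDeviceEnablePeerAccess(dst, 0);
                if (err != cudaSuccess && err != cudaErrorPeerAccessAlreadyEnabled) {
                    fprintf(stderr, "peer access %d -> %d failed: %s\n",
                            src, dst, cudaGetErrorString(err));
                }
            }
        }
    }
}
```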
2023-09-17 | llama.cpp : show model size and BPW on load (#3223) | slaren
2023-09-17 | CUDA: fix scratch malloced on non-main device (#3220) | Johannes Gäßler
2023-09-16 | Enable BUILD_SHARED_LIBS=ON on all Windows builds (#3215) | IsaacDynamo
2023-09-16 | Enable build with CUDA 11.0 (make) (#3132) | Vlad
    * CUDA 11.0 fixes
    * Cleaner CUDA/host flags separation
      Also renamed GGML_ASSUME into GGML_CUDA_ASSUME
2023-09-16 | Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170) | goerch
    * Fix for #2721
    * Re-enable tokenizer test for LLaMa
    * Add `console.cpp` dependency
    * Fix dependency to `common`
    * Fixing wrong fix
    * Make console usage platform specific; work on compiler warnings
    * Adapting makefile
    * Remove trailing whitespace
    * Adapting the other parts of the makefile
    * Fix typo
    * Fixing the last deviations from sentencepiece indicated by test-tokenizer-1
    * Simplify logic
    * Add missing change...
    * Fix ugly compiler warning
    * llama_tokenize should accept strings containing NUL now
    * Adding huichen's test case
2023-09-15 | examples : add compiler version and target to build info (#2998) | Cebtenzzre
2023-09-15 | check C++ code with -Wmissing-declarations (#3184) | Cebtenzzre
2023-09-15 | fix build numbers by setting fetch-depth=0 (#3197) | Cebtenzzre
2023-09-15 | llama : add support for StarCoder model architectures (#3187) | Meng Zhang
    * add placeholder of starcoder in gguf / llama.cpp
    * support convert starcoder weights to gguf
    * convert MQA to MHA
    * fix ffn_down name
    * add LLM_ARCH_STARCODER to llama.cpp
    * set head_count_kv = 1
    * load starcoder weight
    * add max_position_embeddings
    * set n_positions to max_position_embeddings
    * properly load all starcoder params
    * fix head count kv
    * fix comments
    * fix vram calculation for starcoder
    * store mqa directly
    * add input embeddings handling
    * add TBD
    * working in cpu, metal buggy
    * cleanup useless code
    * metal : fix out-of-bounds access in soft_max kernels
    * llama : make starcoder graph build more consistent with others
    * refactor: cleanup comments a bit
    * add other starcoder models: 3B, 7B, 15B
    * support-mqa-directly
    * fix: remove max_position_embeddings, use n_train_ctx
    * Update llama.cpp
    * Update llama.cpp
    * Apply suggestions from code review
    * fix: switch to space from tab
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-15 | common : do not use GNU zero-length __VA_ARGS__ extension (#3195) | Cebtenzzre
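To illustrate the underlying portability issue (a generic example, not necessarily the exact change made in #3195): `, ##__VA_ARGS__` is a GNU extension that swallows the trailing comma when no extra arguments are passed, whereas folding the format string into `__VA_ARGS__` itself is standard C99/C++11.

```cpp
// Portable variadic logging macro: the format string rides along in __VA_ARGS__,
// so the argument list is never empty and no GNU comma-swallowing is needed.
#include <cstdio>

// GNU-only variant (relies on ## removing the comma for LOG_GNU("hello\n")):
// #define LOG_GNU(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)

// Standard-conforming variant:
#define LOG(...) fprintf(stderr, __VA_ARGS__)

int main() {
    LOG("plain message\n");   // works without extra arguments
    LOG("value = %d\n", 42);  // and with them
    return 0;
}
```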
2023-09-15 | metal : fix bug in soft_max kernels (out-of-bounds access) (#3194) | Georgi Gerganov
2023-09-15 | convert : make ftype optional in simple scripts (#3185) | Cebtenzzre
2023-09-15 | sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192) | Georgi Gerganov
    * sync : ggml (Metal F32 support + reduce ggml-alloc size)
    * llama-bench : fix ggml_cpu_has_metal() duplicate function
2023-09-15 | cmake : fix building shared libs for clang (rocm) on windows (#3176) | Engininja2
2023-09-15 | flake : use pkg-config instead of pkgconfig (#3188) | Evgeny Kurnevsky
    pkgconfig is an alias; it was removed from nixpkgs: https://github.com/NixOS/nixpkgs/blob/295a5e1e2bacd6e246db8b2bb35d2a9415883224/pkgs/top-level/aliases.nix#L1408
2023-09-15 | metal : relax conditions on fast matrix multiplication kernel (#3168) | Georgi Gerganov
    * metal : relax conditions on fast matrix multiplication kernel
    * metal : revert the concurrency change because it was wrong
    * llama : remove experimental stuff
2023-09-15 | cmake : fix llama.h location when built outside of root directory (#3179) | Andrei
2023-09-15 | ci : Cloud-V for RISC-V builds (#3160) | Ali Tariq
    * Added Cloud-V File
    * Replaced Makefile with original one
    Co-authored-by: moiz.hussain <moiz.hussain@10xengineers.ai>