Age | Commit message | Author
2023-09-16 | Enable BUILD_SHARED_LIBS=ON on all Windows builds (#3215) | IsaacDynamo
2023-09-16 | Enable build with CUDA 11.0 (make) (#3132) | Vlad
  * CUDA 11.0 fixes
  * Cleaner CUDA/host flags separation
    Also renamed GGML_ASSUME into GGML_CUDA_ASSUME
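An "assume" macro like the renamed GGML_CUDA_ASSUME is typically a thin wrapper over the compiler's assume intrinsic. The definition below is a hedged sketch, not the actual ggml source: in CUDA device code it could map to `__builtin_assume()`, which lets the compiler optimize as if the condition always holds, while on the host it degrades to a no-op.

```c
// Hedged sketch of an assume macro (names match the log above, but the
// exact definition in ggml-cuda may differ).
#if defined(__CUDA_ARCH__)
#   define GGML_CUDA_ASSUME(x) __builtin_assume(x)
#else
#   define GGML_CUDA_ASSUME(x) do {} while (0)  // inert on the host
#endif

// Example use: promising the compiler that the length is positive can
// enable better unrolling in device code; on the host it does nothing.
static int sum_first(const int *v, int n) {
    GGML_CUDA_ASSUME(n > 0);
    int s = 0;
    for (int i = 0; i < n; ++i) {
        s += v[i];
    }
    return s;
}
```

Separating such macros per target is also why the commit splits CUDA flags from host flags: the intrinsic is only meaningful to the device compiler.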
2023-09-16 | Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170) | goerch
  * Fix for #2721
  * Reenable tokenizer test for LLaMa
  * Add `console.cpp` dependency
  * Fix dependency to `common`
  * Fix a wrong fix
  * Make console usage platform specific
    Work on compiler warnings.
  * Adapting makefile
  * Remove trailing whitespace
  * Adapting the other parts of the makefile
  * Fix typo
  * Fixing the last deviations from sentencepiece indicated by test-tokenizer-1
  * Simplify logic
  * Add missing change...
  * Fix ugly compiler warning
  * llama_tokenize should accept strings containing NUL now
  * Adding huichen's test case
2023-09-15 | examples : add compiler version and target to build info (#2998) | Cebtenzzre
2023-09-15 | check C++ code with -Wmissing-declarations (#3184) | Cebtenzzre
2023-09-15 | fix build numbers by setting fetch-depth=0 (#3197) | Cebtenzzre
2023-09-15 | llama : add support for StarCoder model architectures (#3187) | Meng Zhang
  * add placeholder of starcoder in gguf / llama.cpp
  * support convert starcoder weights to gguf
  * convert MQA to MHA
  * fix ffn_down name
  * add LLM_ARCH_STARCODER to llama.cpp
  * set head_count_kv = 1
  * load starcoder weight
  * add max_position_embeddings
  * set n_positions to max_position_embeddings
  * properly load all starcoder params
  * fix head count kv
  * fix comments
  * fix vram calculation for starcoder
  * store mqa directly
  * add input embeddings handling
  * add TBD
  * working in cpu, metal buggy
  * cleanup useless code
  * metal : fix out-of-bounds access in soft_max kernels
  * llama : make starcoder graph build more consistent with others
  * refactor : cleanup comments a bit
  * add other starcoder models: 3B, 7B, 15B
  * support MQA directly
  * fix : remove max_position_embeddings, use n_train_ctx
  * Update llama.cpp (apply suggestions from code review)
  * fix : switch to space from tab
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-15 | common : do not use GNU zero-length __VA_ARGS__ extension (#3195) | Cebtenzzre
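The extension in question is the GNU `, ##__VA_ARGS__` comma-swallowing trick, which lets a variadic macro be invoked with zero variadic arguments. Below is a hedged reconstruction of the pattern and a standard-conforming alternative; the macro names are illustrative, not the exact ones in the patch.

```c
#include <stdio.h>

// Non-portable GNU extension (only compiles with GCC/Clang extensions):
//
//   #define LOG_GNU(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)
//
// Without "##", LOG_GNU("hi") would expand to fprintf(stderr, "hi", )
// and fail to compile under strict C99/C++11.

// Portable alternative: fold the format string into the variadic part,
// so __VA_ARGS__ is never empty and no comma needs to be deleted.
#define LOG(...)          fprintf(stderr, __VA_ARGS__)
#define LOG_BUF(buf, ...) snprintf((buf), sizeof(buf), __VA_ARGS__)
```

With this shape, `LOG("hi\n")` and `LOG("x=%d\n", 42)` both work without any compiler extension.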
2023-09-15 | metal : fix bug in soft_max kernels (out-of-bounds access) (#3194) | Georgi Gerganov
2023-09-15 | convert : make ftype optional in simple scripts (#3185) | Cebtenzzre
2023-09-15 | sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192) | Georgi Gerganov
  * sync : ggml (Metal F32 support + reduce ggml-alloc size)
  * llama-bench : fix ggml_cpu_has_metal() duplicate function
2023-09-15 | cmake : fix building shared libs for clang (ROCm) on Windows (#3176) | Engininja2
2023-09-15 | flake : use pkg-config instead of pkgconfig (#3188) | Evgeny Kurnevsky
  pkgconfig is an alias and was removed from nixpkgs:
  https://github.com/NixOS/nixpkgs/blob/295a5e1e2bacd6e246db8b2bb35d2a9415883224/pkgs/top-level/aliases.nix#L1408
2023-09-15 | metal : relax conditions on fast matrix multiplication kernel (#3168) | Georgi Gerganov
  * metal : relax conditions on fast matrix multiplication kernel
  * metal : revert the concurrency change because it was wrong
  * llama : remove experimental stuff
2023-09-15 | cmake : fix llama.h location when built outside of root directory (#3179) | Andrei
2023-09-15 | ci : Cloud-V for RISC-V builds (#3160) | Ali Tariq
  * Added Cloud-V file
  * Replaced Makefile with original one
  Co-authored-by: moiz.hussain <moiz.hussain@10xengineers.ai>
2023-09-15 | llama : remove mtest (#3177) | Roland
  * Remove mtest
  * Remove it from common/common.h and examples/main/main.cpp
2023-09-14 | llama : make quantize example up to 2.7x faster (#3115) | Cebtenzzre
2023-09-14 | flake : allow $out/include to already exist (#3175) | jneem
2023-09-14 | cmake : compile ggml-rocm with -fpic when building shared library (#3158) | Andrei
2023-09-14 | flake : include llama.h in nix output (#3159) | Asbjørn Olling
2023-09-14 | make : fix clang++ detection, move some definitions to CPPFLAGS (#3155) | Cebtenzzre
  * make : fix clang++ detection
  * make : fix compiler definitions outside of CPPFLAGS
2023-09-14 | CI : add FreeBSD & simplify CUDA Windows builds (#3053) | Alon
  * add FreeBSD to CI
  * bump actions/checkout to v3
  * bump CUDA 12.1.0 -> 12.2.0
  * bump Jimver/cuda-toolkit version
  * unify and simplify "Copy and pack Cuda runtime"
  * install only necessary CUDA sub-packages
2023-09-14 | falcon : use stated vocab size (#2914) | akawrykow
2023-09-14 | cmake : add relocatable Llama package (#2960) | bandoti
  * Keep static libs and headers with install
  * Add logic to generate Config package
  * Use proper build info
  * Add llama as import library
  * Prefix target with package name
  * Add example project using CMake package
  * Update README
  * Remove trailing whitespace
2023-09-14 | docker : add GPU image CI builds (#3103) | dylan
  Enables the GPU-enabled container images to be built and pushed alongside the CPU containers.
  Co-authored-by: canardleteer <eris.has.a.dad+github@gmail.com>
2023-09-14 | gguf-py : support identity operation in TensorNameMap (#3095) | Kerfuffle
  Make the try_suffixes keyword parameter optional.
2023-09-14 | feature : support Baichuan series models (#3009) | jameswu2014
2023-09-14 | speculative : add heuristic algorithm (#3006) | Leng Yue
  * Add heuristic algo for speculative decoding
  * Constrain minimum n_draft to 2
  * speculative : improve heuristic impl
  * speculative : be more rewarding upon guessing max drafted tokens
  * speculative : fix typos
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
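The bullet points above can be sketched as a small update rule. This is a hedged reconstruction from the commit summary, not the exact code of #3006: grow the draft size when the target model accepted every drafted token, shrink it after a partial acceptance, and never let it drop below 2.

```c
// Hypothetical helper illustrating the heuristic described in the log.
// n_draft:    current number of tokens drafted per step
// n_accepted: how many of them the target model accepted last step
static int update_n_draft(int n_draft, int n_accepted) {
    if (n_accepted >= n_draft) {
        n_draft += 2;  // reward guessing the max number of drafted tokens
    } else {
        n_draft -= 1;  // back off after a rejection
    }
    return n_draft < 2 ? 2 : n_draft;  // constrain minimum n_draft to 2
}
```

The asymmetry (grow fast, shrink slowly, hard floor at 2) keeps speculation aggressive when the draft model is tracking the target well, without ever degenerating to single-token drafts.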
2023-09-13 | whisper : tokenizer fix + re-enable tokenizer test for LLaMa (#3096) | goerch
  * Fix for #2721
  * Reenable tokenizer test for LLaMa
  * Add `console.cpp` dependency
  * Fix dependency to `common`
  * Fix a wrong fix
  * Make console usage platform specific
    Work on compiler warnings.
  * Adapting makefile
  * Remove trailing whitespace
  * Adapting the other parts of the makefile
  * Fix typo
2023-09-13 | cmake : add a compiler flag check for FP16 format (#3086) | Tristan Ross
2023-09-13 | CUDA: mul_mat_q RDNA2 tunings (#2910) | Johannes Gäßler
  * CUDA: mul_mat_q RDNA2 tunings
  * Update ggml-cuda.cu
  Co-authored-by: Henri Vasserman <henv@hot.ee>
2023-09-13 | speculative : add --n-gpu-layers-draft option (#3063) | FK
2023-09-12 | arm64 support for Windows (#3007) | Eric Sommerlade
  Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-09-13 | CUDA: fix LoRAs (#3130) | Johannes Gäßler
2023-09-11 | CUDA: fix mul_mat_q not used for output tensor (#3127) | Johannes Gäßler
2023-09-11 | CUDA: lower GPU latency + fix Windows performance (#3110) | Johannes Gäßler
2023-09-11 | cmake : support build for iOS/tvOS (#3116) | Jhen-Jie Hong
  * cmake : support build for iOS/tvOS
  * ci : add iOS/tvOS build into macOS-latest-cmake
  * ci : split iOS/tvOS jobs
2023-09-11 | CUDA: add device number to error messages (#3112) | Johannes Gäßler
2023-09-11 | metal : PP speedup (#3084) | Kawrakow
  * Minor speed gains for all quantization types
  * metal : faster kernel_scale via float4
  * Various other speedups for "small" kernels
  * metal : faster soft_max via float4
  * metal : faster diagonal infinity
    Although, to me it looks like one should simply fuse scale + diagonal infinity + soft_max on the KQ tensor.
  * Another faster f16 x f32 matrix multiply kernel
  * Reverting the diag infinity change
    It does work for PP, but somehow it fails for TG. Need to look more into it.
  * metal : add back faster diagonal infinity, this time more carefully
  * metal : minor (readability)
  Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-10 | convert : remove most of the n_mult usage in convert.py (#3098) | Erik Scholz
2023-09-09 | metal : support for Swift (#3078) | kchro3
  * Metal support for Swift
  * update
  * add a toggle for arm/arm64
  * set minimum versions for all platforms
  * update to use newLibraryWithURL
  * bump version
  Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>
2023-09-09 | metal : support build for iOS/tvOS (#3089) | Jhen-Jie Hong
2023-09-08 | flake : add train-text-from-scratch to flake.nix (#3042) | takov751
2023-09-08 | readme : fix typo (#3043) | Ikko Eltociear Ashimine
  * readme : fix typo acceleation -> acceleration
  * Update README.md
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-08 | metal : Q3_K speedup (#2995) | Kawrakow
  * Slightly faster Q3_K and Q5_K on metal
  * Another Q3_K speedup on metal
    Combined with previous commit, we are now +9.6% for TG. PP is not affected as this happens via the matrix multiplication templates.
  * Slowly progressing on Q3_K on metal
    We are now 13% faster than master.
  * Another small improvement for Q3_K on metal
  Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-09-08 | examples : make n_ctx warning work again (#3066) | Cebtenzzre
  This was broken by commit e36ecdcc ("build : on Mac OS enable Metal by default (#2901)").
2023-09-08 | readme : update hot topics | Georgi Gerganov
2023-09-08 | sync : ggml (CUDA GLM RoPE + POSIX) (#3082) | Georgi Gerganov
2023-09-08 | build : do not use _GNU_SOURCE gratuitously (#2035) | Przemysław Pawełczyk
  * Do not use _GNU_SOURCE gratuitously.
    What is needed to build llama.cpp and the examples is the availability of stuff defined in The Open Group Base Specifications Issue 6 (https://pubs.opengroup.org/onlinepubs/009695399/), also known as the Single UNIX Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions, plus some stuff from BSD that is not specified in POSIX.1. Well, that was true until NUMA support was added recently, so enable GNU libc extensions for Linux builds to cover that.
    Not having feature test macros in the source code gives greater flexibility to those wanting to reuse it in a 3rd-party app, as they can build it with FTMs set by the Makefile here, or with other FTMs depending on their needs.
    It builds without issues in Alpine (musl libc), Ubuntu (glibc), and MSYS2.
  * make : enable Darwin extensions for macOS to expose RLIMIT_MEMLOCK
  * make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK
  * make : use BSD-specific FTMs to enable alloca on BSDs
  * make : fix OpenBSD build by exposing newer POSIX definitions
  * cmake : follow recent FTM improvements from Makefile
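Feature-test macros must be defined before the first system header is included, which is why the commit moves them out of the sources and into the build system (`-D_GNU_SOURCE` on Linux, Darwin/BSD extensions elsewhere). A minimal sketch of what requesting plain POSIX.1-2001 + XSI (SUSv3) looks like from portable C code, as the commit message describes:

```c
// Must appear before any system header; selects POSIX.1-2001 + XSI
// extensions (SUSv3), the baseline the commit message says llama.cpp
// needs outside of the NUMA code.
#define _XOPEN_SOURCE 600

#include <stdlib.h>
#include <string.h>

// strdup() is an XSI function: on a strictly conforming libc it is only
// declared when a feature-test macro such as the one above is set.
static char *dup_or_null(const char *s) {
    return s ? strdup(s) : NULL;
}
```

Keeping the macro out of the source and in CFLAGS instead gives third-party consumers the flexibility the commit describes: they can request GNU, BSD, or strict POSIX behavior without patching the code.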