Age | Commit message | Author
---|---|---
2023-09-27 | readme : add some recent perplexity and bpw measurements to READMEs, link for k-quants (#3340) * Update README.md * Update README.md * Update README.md with k-quants bpw measurements | BarfingLemurs
2023-09-25 | cmake : fix build-info.h on MSVC (#3309) | DAN™
2023-09-25 | docs : fix typo in CLBlast_DIR var. (#3330) | 2f38b454
2023-09-25 | nix : add cuda, use a symlinked toolkit for cmake (#3202) | Erik Scholz
2023-09-23 | llama-bench : add README (#3317) * llama-bench : add README * minor edit | slaren
2023-09-23 | examples : fix RoPE defaults to match PR #3240 (#3315) | Cebtenzzre
2023-09-22 | scripts : use `/usr/bin/env` in shebang (#3313) | Kevin Ji
2023-09-21 | Update README.md (#3289) * Update README.md * Update README.md (Co-authored-by: slaren <slarengh@gmail.com>) | Lee Drake
2023-09-21 | ggml-opencl.cpp : make private functions static (#3300) | shibe2
2023-09-21 | zig : fix for updated c lib (#3259) | Edward Taylor
2023-09-21 | embedding : update README.md (#3224) | yuiseki
2023-09-21 | CUDA : use only 1 thread if fully offloaded (#2915) | Johannes Gäßler
2023-09-20 | readme : update hot topics | Georgi Gerganov
2023-09-20 | llama : allow gguf RoPE keys to be overridden with defaults (#3240) | Cebtenzzre
2023-09-20 | benchmark-matmult : do not use integer abs() on a float (#3277) | Cebtenzzre
2023-09-20 | flake : restore default package's buildInputs (#3262) | kang
2023-09-20 | CI : FreeBSD fix (#3258) * freebsd ci: use qemu | Alon
2023-09-20 | examples : fix benchmark-matmult (#1554) The precision for Q4_0 has degraded since #1508 | Georgi Gerganov
2023-09-18 | make : restore build-info.h dependency for several targets (#3205) | Cebtenzzre
2023-09-18 | ci : switch cudatoolkit install on windows to networked (#3236) | Erik Scholz
2023-09-17 | CUDA : fix peer access logic (#3231) | Johannes Gäßler
2023-09-17 | CUDA : enable peer access between devices (#2470) | Johannes Gäßler
2023-09-17 | llama.cpp : show model size and BPW on load (#3223) | slaren
2023-09-17 | CUDA : fix scratch malloced on non-main device (#3220) | Johannes Gäßler
2023-09-16 | Enable BUILD_SHARED_LIBS=ON on all Windows builds (#3215) | IsaacDynamo
2023-09-16 | Enable build with CUDA 11.0 (make) (#3132) * CUDA 11.0 fixes * Cleaner CUDA/host flags separation; also renamed GGML_ASSUME to GGML_CUDA_ASSUME | Vlad
2023-09-16 | Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170) * Fix for #2721 * Reenable tokenizer test for LLaMa * Add `console.cpp` dependency * Fix dependency to `common` * Fixing wrong fix * Make console usage platform specific; work on compiler warnings * Adapting makefile * Remove trailing whitespace * Adapting the other parts of the makefile * Fix typo * Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 * Simplify logic * Add missing change... * Fix ugly compiler warning * llama_tokenize should accept strings containing NUL now * Adding huichen's test case | goerch
2023-09-15 | examples : add compiler version and target to build info (#2998) | Cebtenzzre
2023-09-15 | check C++ code with -Wmissing-declarations (#3184) | Cebtenzzre
2023-09-15 | fix build numbers by setting fetch-depth=0 (#3197) | Cebtenzzre
2023-09-15 | llama : add support for StarCoder model architectures (#3187) * add placeholder of starcoder in gguf / llama.cpp * support convert starcoder weights to gguf * convert MQA to MHA * fix ffn_down name * add LLM_ARCH_STARCODER to llama.cpp * set head_count_kv = 1 * load starcoder weight * add max_position_embeddings * set n_positions to max_position_embeddings * properly load all starcoder params * fix head count kv * fix comments * fix vram calculation for starcoder * store mqa directly * add input embeddings handling * add TBD * working in cpu, metal buggy * cleanup useless code * metal : fix out-of-bounds access in soft_max kernels * llama : make starcoder graph build more consistent with others * refactor : cleanup comments a bit * add other starcoder models: 3B, 7B, 15B * support mqa directly * fix : remove max_position_embeddings, use n_train_ctx * fix : switch to space from tab (Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>) | Meng Zhang
2023-09-15 | common : do not use GNU zero-length __VA_ARGS__ extension (#3195) | Cebtenzzre
2023-09-15 | metal : fix bug in soft_max kernels (out-of-bounds access) (#3194) | Georgi Gerganov
2023-09-15 | convert : make ftype optional in simple scripts (#3185) | Cebtenzzre
2023-09-15 | sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192) * sync : ggml (Metal F32 support + reduce ggml-alloc size) * llama-bench : fix ggml_cpu_has_metal() duplicate function | Georgi Gerganov
2023-09-15 | cmake : fix building shared libs for clang (rocm) on windows (#3176) | Engininja2
2023-09-15 | flake : use pkg-config instead of pkgconfig (#3188) pkgconfig is an alias and was removed from nixpkgs: https://github.com/NixOS/nixpkgs/blob/295a5e1e2bacd6e246db8b2bb35d2a9415883224/pkgs/top-level/aliases.nix#L1408 | Evgeny Kurnevsky
2023-09-15 | metal : relax conditions on fast matrix multiplication kernel (#3168) * metal : relax conditions on fast matrix multiplication kernel * metal : revert the concurrency change because it was wrong * llama : remove experimental stuff | Georgi Gerganov
2023-09-15 | cmake : fix llama.h location when built outside of root directory (#3179) | Andrei
2023-09-15 | ci : Cloud-V for RISC-V builds (#3160) * Added Cloud-V file * Replaced Makefile with original one (Co-authored-by: moiz.hussain <moiz.hussain@10xengineers.ai>) | Ali Tariq
2023-09-15 | llama : remove mtest (#3177) * Remove mtest * Remove from common/common.h and examples/main/main.cpp | Roland
2023-09-14 | llama : make quantize example up to 2.7x faster (#3115) | Cebtenzzre
2023-09-14 | flake : allow $out/include to already exist (#3175) | jneem
2023-09-14 | cmake : compile ggml-rocm with -fpic when building shared library (#3158) | Andrei
2023-09-14 | flake : include llama.h in nix output (#3159) | Asbjørn Olling
2023-09-14 | make : fix clang++ detection, move some definitions to CPPFLAGS (#3155) * make : fix clang++ detection * make : fix compiler definitions outside of CPPFLAGS | Cebtenzzre
2023-09-14 | CI : add FreeBSD & simplify CUDA windows (#3053) * add freebsd to ci * bump actions/checkout to v3 * bump cuda 12.1.0 -> 12.2.0 * bump Jimver/cuda-toolkit version * unify and simplify "Copy and pack Cuda runtime" * install only necessary cuda sub-packages | Alon
2023-09-14 | falcon : use stated vocab size (#2914) | akawrykow
2023-09-14 | cmake : add relocatable Llama package (#2960) * Keep static libs and headers with install * Add logic to generate Config package * Use proper build info * Add llama as import library * Prefix target with package name * Add example project using CMake package * Update README * Remove trailing whitespace | bandoti
2023-09-14 | docker : add gpu image CI builds (#3103) Enables the GPU-enabled container images to be built and pushed alongside the CPU containers. (Co-authored-by: canardleteer <eris.has.a.dad+github@gmail.com>) | dylan