Age  Commit message  Author
2023-09-11  CUDA: lower GPU latency + fix Windows performance (#3110)  (Johannes Gäßler)
2023-09-11  cmake : support build for iOS/tvOS (#3116)  (Jhen-Jie Hong)
* cmake : support build for iOS/tvOS
* ci : add iOS/tvOS build into macOS-latest-cmake
* ci : split ios/tvos jobs
2023-09-11  CUDA: add device number to error messages (#3112)  (Johannes Gäßler)
2023-09-11  metal : PP speedup (#3084)  (Kawrakow)
* Minor speed gains for all quantization types
* metal : faster kernel_scale via float4
* Various other speedups for "small" kernels
* metal : faster soft_max via float4
* metal : faster diagonal infinity. Although, to me it looks like one should simply fuse scale + diagonal infinity + soft_max on the KQ tensor.
* Another faster f16 x f32 matrix multiply kernel
* Reverting the diagonal infinity change. It does work for PP, but somehow it fails for TG. Need to look more into it.
* metal : add back faster diagonal infinity, this time more carefully
* metal : minor (readability)

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-10  convert: remove most of the n_mult usage in convert.py (#3098)  (Erik Scholz)
2023-09-09  metal : support for Swift (#3078)  (kchro3)
* Metal support for Swift
* update
* add a toggle for arm/arm64
* set minimum versions for all platforms
* update to use newLibraryWithURL
* bump version

Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>
2023-09-09  metal : support build for iOS/tvOS (#3089)  (Jhen-Jie Hong)
2023-09-08  flake : add train-text-from-scratch to flake.nix (#3042)  (takov751)
2023-09-08  readme : fix typo (#3043)  (Ikko Eltociear Ashimine)
* readme : fix typo acceleation -> acceleration
* Update README.md

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-08  metal : Q3_K speedup (#2995)  (Kawrakow)
* Slightly faster Q3_K and Q5_K on metal
* Another Q3_K speedup on metal. Combined with the previous commit, we are now +9.6% for TG. PP is not affected, as this happens via the matrix multiplication templates.
* Slowly progressing on Q3_K on metal: we are now 13% faster than master
* Another small improvement for Q3_K on metal

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-09-08  examples : make n_ctx warning work again (#3066)  (Cebtenzzre)
This was broken by commit e36ecdcc ("build : on Mac OS enable Metal by default (#2901)").
2023-09-08  readme : update hot topics  (Georgi Gerganov)
2023-09-08  sync : ggml (CUDA GLM RoPE + POSIX) (#3082)  (Georgi Gerganov)
ggml-ci
2023-09-08  build : do not use _GNU_SOURCE gratuitously (#2035)  (Przemysław Pawełczyk)
* Do not use _GNU_SOURCE gratuitously. What is needed to build llama.cpp and the examples is the availability of the facilities defined in The Open Group Base Specifications Issue 6 (https://pubs.opengroup.org/onlinepubs/009695399/), also known as the Single Unix Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions, plus some BSD facilities not specified in POSIX.1. Well, that was true until NUMA support was added recently, so GNU libc extensions are enabled for Linux builds to cover that. Not having feature test macros in the source code gives greater flexibility to those wanting to reuse it in a third-party app, as they can build it with the FTMs set by the Makefile here, or with other FTMs depending on their needs. It builds without issues on Alpine (musl libc), Ubuntu (glibc), and MSYS2.
* make : enable Darwin extensions for macOS to expose RLIMIT_MEMLOCK
* make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK
* make : use BSD-specific FTMs to enable alloca on BSDs
* make : fix OpenBSD build by exposing newer POSIX definitions
* cmake : follow recent FTM improvements from Makefile
2023-09-08  docker : add git to full-cuda.Dockerfile main-cuda.Dockerfile (#3044)  (hongbo.mo)
2023-09-08  Update deprecated GGML TheBloke links to GGUF (#3079)  (Yui)
2023-09-08  ggml-alloc : correctly check mmap return value for errors (#3075)  (slaren)
2023-09-08  enable CPU HBM (#2603)  (Kunshang Ji)
* add CPU HBM support
* add memalign 0-byte check
* Update ggml.c
* Update llama.cpp
* ggml : allow ggml_init with 0 size
* retrigger ci
* fix code style

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-07  convert : fix F32 ftype not being saved (#3048)  (Cebtenzzre)
2023-09-07  fix some warnings from gcc and clang-tidy (#3038)  (Cebtenzzre)
Co-authored-by: xaedes <xaedes@gmail.com>
2023-09-07  make : improve test target (#3031)  (Cebtenzzre)
2023-09-07  make : fix CPPFLAGS (#3035)  (Cebtenzzre)
2023-09-07  llama-bench : use two tokens in the warmup run for prompt evals (#3059)  (slaren)
2023-09-07  metal : parallel RoPE on Metal (#3024)  (Kawrakow)
* Parallel RoPE on metal
* PR suggestion

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-09-07  metal : correct fix of kernel_norm (#3060)  (Kawrakow)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-07  metal : fix kernel_norm (fixes Falcon on Metal) (#3057)  (Georgi Gerganov)
* metal : fix kernel_norm (ggml-ci)
* metal : put warning in kernel_norm to not combine the loops
* metal : restore original F16 mat-vec multiplication; it works after the norm fixes
* common : don't do warm-up with more than n_batch tokens (close #3058)
* metal : minor
2023-09-07  ggml : posixify madvise and pagesize (#3037)  (Przemysław Pawełczyk)
* llama : use posix_madvise() instead of madvise() derived from BSD
  sed -i 's,\<madvise\>,posix_&,g;s,\<MADV_,POSIX_&,g' llama.cpp
* ggml : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD
  sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml.c
* metal : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD
  sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml-metal.m
2023-09-06  k-quants : fix zero-weight guard in Q6_K (ref #3040)  (Georgi Gerganov)
2023-09-06  convert-llama-ggml-to-gguf: Try to handle files older than GGJTv3 (#3023)  (Kerfuffle)
* convert-llama-ggmlv3-to-gguf: Try to handle files older than GGJTv3
* Better error messages for files that cannot be converted
* Add file type to GGUF output
* Rename to convert-llama-ggml-to-gguf.py
* Include original file type information in description
* Improve some informational output
2023-09-05  build : add LLAMA_METAL_NDEBUG flag (#3033)  (Cebtenzzre)
2023-09-05  make : use new flag variables for recent changes (#3019)  (Cebtenzzre)
2023-09-05  examples : replace fprintf to stdout with printf (#3017)  (Cebtenzzre)
2023-09-05  convert: fix convert.py not working with int filename_stem (#3028)  (Erik Scholz)
* fix implicit int to string conversion
* convert : remove an obsolete pyright comment

Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-09-05  Guard against all weights in a super-block being zero (#3010)  (Kawrakow)
* Guard against all weights in a super-block being zero
* Also guard against extremely small weights (closes #2982)

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-09-05  llama : update logic for number of threads when using BLAS  (Georgi Gerganov)
2023-09-05  speculative : add grammar support (#2991)  (Georgi Gerganov)
* speculative : add grammar support
* grammars : add json_arr.gbnf
* grammar : add comments to new grammar file
* grammar : remove one nested level
* common : warm-up with 2 tokens - seems to work better
* speculative : print draft token pieces
* speculative : reuse grammar parser + better logs and comments
* speculative : avoid grammar_mem
* make : fix speculative build
2023-09-04  py : minor  (Georgi Gerganov)
2023-09-04  build : on Mac OS enable Metal by default (#2901)  (Georgi Gerganov)
* build : on Mac OS enable Metal by default
* make : try to fix build on Linux
* make : move targets back to the top
* make : fix target clean
* llama : enable GPU inference by default with Metal
* llama : fix vocab_only logic when GPU is enabled
* common : better `n_gpu_layers` assignment
* readme : update Metal instructions
* make : fix merge conflict remnants
* gitignore : metal
2023-09-04  ggml-opencl : store GPU buffer in ggml_tensor::extra (#2994)  (slaren)
2023-09-04  llama-bench : make cpp file non-executable (#2999)  (Cebtenzzre)
2023-09-04  make : add speculative example (#3003)  (Leng Yue)
2023-09-04  server : add a subtle loading animation to the edit box (#2466)  (Aarni Koskela)
* editorconfig : add override for the server HTML (which is already 2-space indented)
* server : add a subtle loading animation to the edit box
2023-09-04  2x faster (rms) norm CUDA kernels (3.7% e2e improvement) (#2985)  (Jiahao Li)
* 2x faster (rms) norm CUDA kernels
* Fix code style
2023-09-03  ggml-alloc : use virtual memory for measurement (#2973)  (slaren)
* ggml-alloc : use virtual memory for measurement
* compatibility fixes for MAP_ANONYMOUS
* fallback to fixed address for systems without virtual memory
2023-09-03  speculative : PoC for speeding up inference via speculative sampling (#2926)  (Georgi Gerganov)
* speculative : initial example
* speculative : print encoding speed
* speculative : add --draft CLI arg
2023-09-03  perplexity : fix ETA by warming up the model with an empty run  (Georgi Gerganov)
2023-09-03  gguf(python): Fix special vocab handling when id < 0 (#2984)  (Kerfuffle)
2023-09-03  metal : restore 363f0bf and fix reduce in F16_F32 kernels (#2986)  (Georgi Gerganov)
2023-09-03  cov : disable comment in PRs (#2989)  (Alon)
2023-09-03  llama : fix bpe tokenize from byte (#2889)  (opparco)