* Metal support for Swift
* update
* add a toggle for arm/arm64
* set minimum versions for all platforms
* update to use newLibraryWithURL
* bump version
Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>
---------
Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>
* readme : fix typo
acceleation -> acceleration
* Update README.md
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Slightly faster Q3_K and Q5_K on metal
* Another Q3_K speedup on metal
Combined with the previous commit, we are now +9.6% for TG (text generation).
PP (prompt processing) is not affected, as that path goes through the
matrix multiplication templates.
* Slowly progressing on Q3_K on metal
We are now 13% faster than master
* Another small improvement for Q3_K on metal
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
This was broken by commit e36ecdcc ("build : on Mac OS enable Metal by
default (#2901)").
ggml-ci
* Do not use _GNU_SOURCE gratuitously.
What is needed to build llama.cpp and the examples is the availability of
interfaces defined in The Open Group Base Specifications Issue 6
(https://pubs.opengroup.org/onlinepubs/009695399/), also known as the
Single Unix Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions,
plus some BSD interfaces that are not specified in POSIX.1.
Well, that was true until NUMA support was added recently,
so enable GNU libc extensions for Linux builds to cover that.
Not having feature test macros (FTMs) in the source code gives greater
flexibility to those wanting to reuse it in third-party apps: they can
build it with the FTMs set by the Makefile here, or with other FTMs,
depending on their needs. It builds without issues on Alpine (musl libc),
Ubuntu (glibc), and MSYS2. (A short sketch of this approach follows the
list below.)
* make : enable Darwin extensions for macOS to expose RLIMIT_MEMLOCK
* make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK
* make : use BSD-specific FTMs to enable alloca on BSDs
* make : fix OpenBSD build by exposing newer POSIX definitions
* cmake : follow recent FTM improvements from Makefile
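A minimal sketch of the FTM approach, assuming a Makefile-driven build; the
file name and compiler lines are illustrative, not the project's actual
build rules:

    /* ftm_demo.c - no feature test macros are defined in the source;
       the build sets them instead, e.g.:
         cc -D_XOPEN_SOURCE=600 ftm_demo.c  (POSIX.1-2001 + XSI, i.e. SUSv3)
         cc -D_GNU_SOURCE ftm_demo.c        (Linux builds needing glibc extras)
    */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>  /* strdup() is an XSI interface, visible under
                            _XOPEN_SOURCE=600 */

    int main(void) {
        char * s = strdup("built with FTMs from the command line");
        if (s != NULL) {
            printf("%s\n", s);
            free(s);
        }
        return 0;
    }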
* add cpu hbm support
* add memalign 0 byte check
* Update ggml.c
* Update llama.cpp
* ggml : allow ggml_init with 0 size
* retrigger ci
* fix code style
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
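A hedged sketch of the zero-byte check described above; the helper name is
hypothetical and the real ggml code may differ:

    #include <stdio.h>
    #include <stdlib.h>

    /* posix_memalign() with a size of 0 may return NULL or a unique
       pointer depending on the implementation, so reject 0 up front */
    static void * aligned_alloc_checked(size_t alignment, size_t size) {
        void * ptr = NULL;
        if (size == 0) {
            return NULL;
        }
        if (posix_memalign(&ptr, alignment, size) != 0) {
            return NULL;
        }
        return ptr;
    }

    int main(void) {
        void * buf = aligned_alloc_checked(64, 0);  /* NULL, by design */
        printf("%s\n", buf == NULL ? "rejected 0-byte request" : "oops");
        free(buf);  /* free(NULL) is a no-op */
        return 0;
    }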
Co-authored-by: xaedes <xaedes@gmail.com>
* Parallel RoPE on metal
* PR suggestion
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
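For context, a scalar reference of the rotation RoPE applies (the Metal
kernel parallelizes these per-pair rotations); this sketch assumes the
original interleaved pair layout and is not the kernel itself:

    #include <math.h>
    #include <stdio.h>

    /* rotate each (even, odd) pair of the vector by an angle that
       depends on the token position and the pair's dimension index */
    static void rope_ref(float * x, int n_dims, int pos, float freq_base) {
        for (int i = 0; i < n_dims; i += 2) {
            const float theta = (float) pos
                              * powf(freq_base, -(float) i / (float) n_dims);
            const float c = cosf(theta), s = sinf(theta);
            const float x0 = x[i], x1 = x[i + 1];
            x[i]     = x0 * c - x1 * s;
            x[i + 1] = x0 * s + x1 * c;
        }
    }

    int main(void) {
        float v[4] = {1.0f, 0.0f, 1.0f, 0.0f};
        rope_ref(v, 4, 3, 10000.0f);
        printf("%f %f %f %f\n", v[0], v[1], v[2], v[3]);
        return 0;
    }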
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* metal : fix kernel_norm
ggml-ci
* metal : put warning in kernel_norm to not combine the loops
* metal : restore original F16 mat-vec multiplication
It works after the norm fixes
* common : don't do warm-up with more than n_batch tokens (close #3058; sketched below)
ggml-ci
* metal : minor
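Illustrative only, for the warm-up item above: a clamp that keeps a warm-up
decode within the configured batch size. The helper name is an assumption,
not the actual common.cpp code:

    #include <stdio.h>

    /* never warm up with more tokens than the batch size supports */
    static int warmup_token_count(int n_wanted, int n_batch) {
        return n_wanted > n_batch ? n_batch : n_wanted;
    }

    int main(void) {
        printf("%d\n", warmup_token_count(512, 128));  /* prints 128 */
        return 0;
    }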
* llama : use posix_madvise() instead of madvise() derived from BSD
sed -i 's,\<madvise\>,posix_&,g;s,\<MADV_,POSIX_&,g' llama.cpp
* ggml : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD
sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml.c
* metal : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD
sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml-metal.m
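For reference, a small self-contained example of the portable calls those
sed rewrites switch to:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* sysconf(_SC_PAGESIZE) replaces the BSD-derived getpagesize() */
        const long page = sysconf(_SC_PAGESIZE);

        void * buf = mmap(NULL, (size_t) page, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            return 1;
        }
        /* posix_madvise()/POSIX_MADV_* replace madvise()/MADV_* */
        posix_madvise(buf, (size_t) page, POSIX_MADV_WILLNEED);
        munmap(buf, (size_t) page);
        printf("page size: %ld\n", page);
        return 0;
    }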
* convert-llama-ggmlv3-to-gguf: Try to handle files older than GGJTv3
* Better error messages for files that cannot be converted
* Add file type to GGUF output
* Rename to convert-llama-ggml-to-gguf.py
* Include original file type information in description
* Improve some informational output
* fix implicit int to string conversion
* convert : remove an obsolete pyright comment
---------
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
* Guard against all weights in a super-block being zero
* Also guard against extremely small weights
Closes #2982
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
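A hedged sketch of the guard (not the actual k-quants code): find the
super-block's maximum magnitude and bail out before it can produce a zero
or denormal scale. The threshold and helper name are assumptions:

    #include <math.h>
    #include <stdio.h>

    /* returns 0.0f for an all-zero or extremely small super-block, so
       the caller can emit all-zero quants instead of dividing by ~0 */
    static float block_scale(const float * w, int n) {
        float amax = 0.0f;
        for (int i = 0; i < n; ++i) {
            const float a = fabsf(w[i]);
            if (a > amax) {
                amax = a;
            }
        }
        if (amax < 1e-30f) {
            return 0.0f;
        }
        return amax / 127.0f;  /* e.g. a scale for a signed 8-bit grid */
    }

    int main(void) {
        const float zeros[4] = {0.0f, 0.0f, 0.0f, 0.0f};
        printf("%g\n", block_scale(zeros, 4));  /* prints 0 */
        return 0;
    }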
* speculative : add grammar support
* grammars : add json_arr.gbnf
* grammar : add comments to new grammar file
* grammar : remove one nested level
* common : warm-up with 2 tokens - seems to work better
* speculative : print draft token pieces
* speculative : reuse grammar parser + better logs and comments
* speculative : avoid grammar_mem
* make : fix speculative build
* build : on Mac OS enable Metal by default
* make : try to fix build on Linux
* make : move targets back to the top
* make : fix target clean
* llama : enable GPU inference by default with Metal
* llama : fix vocab_only logic when GPU is enabled
* common : better `n_gpu_layers` assignment (sketched below)
* readme : update Metal instructions
* make : fix merge conflict remnants
* gitignore : metal
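A sketch of the kind of default behind the n_gpu_layers item above; the
helper, the -1 sentinel, and the value 1 are illustrative assumptions, not
the actual common.cpp logic:

    #include <stdio.h>

    /* resolve the effective n_gpu_layers: an explicit user value wins;
       otherwise offload by default only when Metal is compiled in */
    static int resolve_n_gpu_layers(int requested, int metal_enabled) {
        if (requested >= 0) {
            return requested;
        }
        return metal_enabled ? 1 : 0;
    }

    int main(void) {
        printf("%d\n", resolve_n_gpu_layers(-1, 1));  /* prints 1 */
        return 0;
    }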
* editorconfig: add override for the server HTML (which already is 2-space indented)
* server: add a subtle loading animation to the edit box
* 2x faster (rms) norm cuda kernels
* Fix code style
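For context, a plain scalar reference of what an RMS norm kernel computes
(the CUDA kernels parallelize the same reduction); the eps value is
illustrative:

    #include <math.h>
    #include <stdio.h>

    /* y[i] = x[i] / sqrt(mean(x^2) + eps) */
    static void rms_norm_ref(const float * x, float * y, int n, float eps) {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i) {
            sum += x[i] * x[i];
        }
        const float scale = 1.0f / sqrtf(sum / (float) n + eps);
        for (int i = 0; i < n; ++i) {
            y[i] = x[i] * scale;
        }
    }

    int main(void) {
        float x[4] = {1.0f, 2.0f, 3.0f, 4.0f}, y[4];
        rms_norm_ref(x, y, 4, 1e-5f);
        printf("%f\n", y[0]);
        return 0;
    }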
* ggml-alloc : use virtual memory for measurement
* compatibility fixes for MAP_ANONYMOUS
* fallback to fixed address for systems without virtual memory
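A hedged sketch of the reservation trick plus the MAP_ANONYMOUS
compatibility shim mentioned above; the fixed fallback address is purely
illustrative:

    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>

    #ifndef MAP_ANONYMOUS      /* some older systems spell it MAP_ANON */
    #define MAP_ANONYMOUS MAP_ANON
    #endif

    /* reserve an address range for measurement only: PROT_NONE commits
       no memory; we just need addresses to account against */
    static void * reserve_measure_range(size_t size) {
        void * p = mmap(NULL, size, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            /* no (usable) virtual memory: fall back to a fixed base
               used only for size bookkeeping (illustrative value) */
            return (void *) 0x1000;
        }
        return p;
    }

    int main(void) {
        void * p = reserve_measure_range((size_t) 1 << 20);  /* 1 MiB */
        printf("reserved at %p\n", p);
        return 0;
    }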
* speculative : initial example
* speculative : print encoding speed
* speculative : add --draft CLI arg
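The core accept/reject rule, as a toy self-contained illustration; the real
draft and target models are replaced here by fixed token arrays, so this is
a sketch of the idea rather than the example's actual code:

    #include <stdio.h>

    int main(void) {
        /* tokens the target model would emit vs. the draft's guesses */
        const int target[4] = {5, 9, 2, 7};
        const int draft [4] = {5, 9, 3, 7};

        for (int i = 0; i < 4; ++i) {
            if (draft[i] == target[i]) {
                printf("accept draft token %d\n", draft[i]);
            } else {
                /* first mismatch: emit the target's token and stop;
                   later draft tokens assume a wrong prefix */
                printf("reject; emit target token %d\n", target[i]);
                break;
            }
        }
        return 0;
    }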
This restores the generated text to be the same as before #2959
* update .gitignore
* makefile: add coverage support (lcov, gcovr)
* add code-coverage workflow
* update code coverage workflow
* run on ubuntu 20.04
* use gcc-8
* check why the job hangs
* add env vars
* add LLAMA_CODE_COVERAGE=1 again
* add CODECOV_TOKEN; add missing make lcov-report
* install lcov
* update Makefile -pb flag
* remove unused GGML_NITER from workflows
* wrap coverage output files in COV_TARGETS
Co-authored-by: Wentai Zhang <wentaizhang@tencent.com>
* Very minor speedup via simd-group synchronization in f16 x f32
* Another very minor speedup on metal
* Quite significant PP speedup on metal
* Another attempt
* Minor
* Massive improvement for TG for fp16
* ~4-5% improvement for Q8_0 TG on metal
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>