Age | Commit message | Author
2024-04-12 | Correct free memory and total memory. (#6630) | MasterYi1024
Co-authored-by: MasterYi <zouxiaoyi@kylinos.cn>
2024-04-12 | eval-callback: use ggml_op_desc to pretty print unary operator name (#6631) | Pierrick Hymbert
2024-04-12 | ci : disable Metal for macOS-latest-cmake-x64 (#6628) | Georgi Gerganov
2024-04-11 | Optimization: eliminate addition of redundant stacks when advancing grammar. (#6616) | Clint Herron
2024-04-11 | As suggested by @slaren, disabling Metal for test to fix CI build on OSX from #6576 (#6619) | Clint Herron
2024-04-11 | Refactor Error Handling for CUDA (#6575) | Nikolas
* Refactor Error Handling for CUDA Add guidance for setting CUDA_DOCKER_ARCH to match GPU compute capability for CUDA versions < 11.7. Include link to NVIDIA's CUDA GPUs documentation for compute capability reference. * Update Makefile Improved wording Co-authored-by: Johannes Gäßler <johannesg@5d6.de> --------- Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2024-04-11 | grammars: 1.5x faster inference w/ complex grammars (vector reserves / reuses) (#6609) | Olivier Chafik
* grammars: reserve rejects & next candidates * grammars: reuse new_stacks * grammars: fix missing sig change in llama.h * grammars: fix test (api changed) * grammars: update gbnf-validator.cpp * grammars: simpler syntax (no swap)
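The 1.5x speedup above comes from pre-reserving and reusing vectors across calls instead of allocating fresh ones per token. A minimal sketch of that reserve-and-reuse pattern, with hypothetical names (this is not the actual llama.cpp grammar code):

```cpp
#include <vector>

// Hypothetical candidate type standing in for a grammar stack entry.
struct candidate { int state; };

// Called once per token: reuses the caller-provided buffers instead of
// constructing new vectors on every invocation.
void advance_grammar(const std::vector<candidate> & current,
                     std::vector<candidate> & next,     // reused across calls
                     std::vector<candidate> & rejects)  // reused across calls
{
    next.clear();                  // keeps capacity, drops elements
    rejects.clear();
    next.reserve(current.size());  // at most one allocation per call
    rejects.reserve(current.size());
    for (const candidate & c : current) {
        if (c.state >= 0) {
            next.push_back(c);
        } else {
            rejects.push_back(c);
        }
    }
}
```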
2024-04-11 | ci: download artifacts to release directory (#6612) | Hugo Roussel
When the download-artifact action was updated to v4, the default download path changed. This fixes binaries not being uploaded to releases.
2024-04-11 | scripts : add --outdir option to hf.sh (#6600) | Daniel Bevenius
* scripts : add --outdir option to hf.sh This commit adds an option to the hf.sh script that allows the user to specify an output directory for the downloaded file. The motivation for this change is that examples that use the hf.sh script to download models from huggingface can now specify the output directory, perhaps the `models` directory, to keep them in one place and not clutter the root directory. Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com> * squash! scripts : add --outdir option to hf.sh Fix format of the --outdir option in the usage message. Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com> --------- Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-11 | eval-callback: Example how to use eval callback for debugging (#6576) | Pierrick Hymbert
* gguf-debug: Example how to use ggml callback for debugging * gguf-debug: no mutex, verify type, fix stride. * llama: cv eval: move cb eval field in common gpt_params * ggml_debug: use common gpt_params to pass cb eval. Fix get tensor SIGV random. * ggml_debug: ci: add tests * ggml_debug: EOL in CMakeLists.txt * ggml_debug: Remove unused param n_batch, no batching here * ggml_debug: fix trailing spaces * ggml_debug: fix trailing spaces * common: fix cb_eval and user data not initialized * ci: build revert label * ggml_debug: add main test label * doc: add a model: add a link to ggml-debug * ggml-debug: add to make toolchain * ggml-debug: tests add the main label * ggml-debug: ci add test curl label * common: allow the warmup to be disabled in llama_init_from_gpt_params * ci: add curl test * ggml-debug: better tensor type support * gitignore : ggml-debug * ggml-debug: printing also the sum of each tensor * ggml-debug: remove block size * eval-callback: renamed from ggml-debug * eval-callback: fix make toolchain --------- Co-authored-by: slaren <slarengh@gmail.com> Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
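The callback-based debugging above works by hooking graph evaluation: the scheduler first asks whether a node should be observed, then hands over the tensor once its data is computed. A hedged sketch against the ggml_backend_sched_eval_callback interface; the wiring through the gpt_params cb_eval field mentioned in the body is omitted, and details should be treated as assumptions:

```cpp
#include <cstdio>
#include "ggml.h"

// Invoked twice per graph node: first with ask == true ("should this tensor
// be observed?"), then with ask == false once its data has been computed.
static bool debug_eval_cb(struct ggml_tensor * t, bool ask, void * user_data) {
    (void) user_data;
    if (ask) {
        return true; // observe every node
    }
    fprintf(stderr, "%-32s op=%-12s type=%s\n",
            t->name, ggml_op_desc(t), ggml_type_name(t->type));
    return true; // returning false aborts evaluation
}
```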
2024-04-10 | gguf : add option to not check tensor data (#6582) | Daniel Bevenius
This commit adds an option to the gguf example to not check the tensor data. The motivation for this is that it can be nice to use the gguf tool to read other .gguf files that were not created by the gguf tool. Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-10 | minor layout improvements (#6572) | Ralph Soika
* minor layout improvements * added missing file, run deps.sh locally
2024-04-10 | llama : add model types for mixtral (#6589) | slaren
2024-04-10 | convert.py : add consolidated.safetensors for mixtral 8x22b (#6587) | slaren
2024-04-10 | docs : how to add a model (#6565) | Pierrick Hymbert
* docs: how to add a model * docs: model: typo and docs * docs: model: add precision on RoPE * docs: model: rephrasing README.md * docs: model: rephrasing README.md * docs: model: README.md fix trailing spaces * docs : some fixes * Update README.md --------- Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-10 | readme : fix ROCm link (#6579) | Artem Zinnatullin
2024-04-10 | readme : update UI list (#6560) | sjxx
2024-04-10 | readme: fix typo in amdgpu target name (#6573) | Jiří Sejkora
2024-04-09 | BERT tokenizer fixes (#6498) | Jared Van Bortel
Key changes: * BERT conversion: fix abuse of LlamaHfVocab, do not set BOS or EOS * Nomic Embed conversion: pad vocab instead of slicing embedding tensor * llama_tokenize: handle added special tokens like HF does
2024-04-09 | sync : ggml | Georgi Gerganov
2024-04-09 | server : detect search query to start webchat (#6554) | Ed Lee
2024-04-09 | llama : add Command R Plus support (#6491) | Carolinabanana
* Add Command R Plus GGUF * Add Command R Plus GGUF * Loading works up to LayerNorm2D * Export new tensors in 1D so they are not quantized. * Fix embedding layer based on Noeda's example * Whitespace * Add line * Fix unexpected tokens on MPS. Re-add F16 fix. (Noeda) * dranger003: Fix block index overflow in CUDA dequantizing. * Reverted blocked multiplication code as it still has issues and could affect other Llama arches * export norms as f32 * fix overflow issues during quant and other cleanup * Type convention Co-authored-by: Georgi Gerganov <ggerganov@gmail.com> * dranger003: Fix more int overflow during quant. --------- Co-authored-by: S <seast@Ss-Mac-Studio.local> Co-authored-by: S <s@example.com> Co-authored-by: slaren <slarengh@gmail.com> Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-09 | license : update copyright notice + add AUTHORS (#6405) | Georgi Gerganov
* license : add AUTHORS * authors : update * scripts : add LICENSE and gen-authors.sh to sync
2024-04-08 | llama : fix attention layer count sanity check (#6550) | Georgi Gerganov
* llama : fix attention layer count sanity check * llama : fix parentheses in attention layer count sanity check There was otherwise a warning when compiling. --------- Co-authored-by: Francis Couture-Harpin <git@compilade.net>
2024-04-08 | Comment explaining a decision (#6531) | kunnis
2024-04-08 | quantize : fix precedence of cli args (#6541) | Georgi Gerganov
2024-04-08 | llama : support negative ith in llama_get_ API (#6519) | Rick G
* llama_sampling_sample with default args is more naively usable * Batches populated by either llama_batch_get_one or llama_batch_add work with default args * Previously get_one could use the default argument * Previously add should usually have used the last index where logits[idx] == true * This hopefully encourages the use of llama_batch_add * By giving expected results when using default arguments. * Adds "negative indexing" feature to llama_get_logits_ith and llama_get_embeddings_ith * Believed to work with any currently well behaved program * Default arg now works for both cases (previously would give strange results for add case) * Any non-negative number is unaffected and behaves as previously * Negative arguments were previously invalid. * Implemented as a special case of indexing as suggested by @compilade in https://github.com/ggerganov/llama.cpp/pull/6519 * Fixed mismatch type errors * cited in macOS CI tests * Missed in original updates based on PR feedback in https://github.com/ggerganov/llama.cpp/pull/6519
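The point of the negative indexing is that callers no longer need to track the absolute index of the last token in a batch. A short usage sketch, assuming the llama.h API of this period:

```cpp
#include "llama.h"

// Greedily pick the next token from the logits of the *last* output in the
// batch: with this PR, i = -1 counts from the end, so no index bookkeeping.
llama_token greedy_next_token(llama_context * ctx, const llama_model * model) {
    const float * logits = llama_get_logits_ith(ctx, -1);
    const int n_vocab = llama_n_vocab(model);
    llama_token best = 0;
    for (llama_token id = 1; id < n_vocab; ++id) {
        if (logits[id] > logits[best]) {
            best = id;
        }
    }
    return best;
}
```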
2024-04-08 | llama : save and restore kv cache for single seq id (#6341) | Jan Boon
* llama : save and restore kv cache for single seq id * remove trailing whitespace * respond error in case there's no space in the kv cache * add kv seq save restore to test case * add --slot-save-path arg to enable save restore and restrict save location * Returning 0 for some cases, instead of asserting. * cleanup error cases * rename sequence state functions * rename state get set functions * add previous function names back in with DEPRECATED notice * update doc * adjust endpoints to preferred style * fix restoring zero cell count * handle seq rm return value * unused param * keep in the size check * fix return types * add server test case for slot save restore * cleanup * add cake * cleanup style * add special * removing a whole sequence never fails * move sequence state file functionality from server to llama to match session api and add version tags * catch exceptions on save as well * error log messages * check types for stricter restore * update server doc * readme : update API changes date * strict filename validation * move include, reject bom as well * also reject empty filename * reject whitespace and trailing dot --------- Co-authored-by: Martin Evans <martindevans@gmail.com> Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
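The renamed sequence-state functions make single-sequence snapshots available to client code, not just through the server's slot-save endpoints. A rough sketch under the llama_state_seq_* naming the PR settled on; the exact signatures here are assumptions, so check llama.h:

```cpp
#include <cstdint>
#include <vector>
#include "llama.h"

// Copy one sequence's kv cache state into a byte buffer...
std::vector<uint8_t> save_seq(llama_context * ctx, llama_seq_id seq_id) {
    std::vector<uint8_t> buf(llama_state_seq_get_size(ctx, seq_id));
    llama_state_seq_get_data(ctx, buf.data(), seq_id);
    return buf;
}

// ...and restore it later under a (possibly different) sequence id.
// Per the commit body, failure returns 0 instead of asserting.
bool restore_seq(llama_context * ctx, const std::vector<uint8_t> & buf,
                 llama_seq_id dest_seq_id) {
    return llama_state_seq_set_data(ctx, buf.data(), dest_seq_id) != 0;
}
```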
2024-04-08 | remove row=1 cond (#6532) | Abhilash Majumder
2024-04-08 | Adding KodiBot to UI list (#6535) | Firat
KodiBot is a free and open source AI chat app released under the GNU General Public License.
2024-04-07 | Change Windows AMD example to release build to make inference much faster. (#6525) | Mark Fairbairn
2024-04-07 | flake.lock: Update (#6517) | Georgi Gerganov
Flake lock file updates: • Updated input 'flake-parts': 'github:hercules-ci/flake-parts/f7b3c975cf067e56e7cda6cb098ebe3fb4d74ca2' (2024-03-01) → 'github:hercules-ci/flake-parts/9126214d0a59633752a136528f5f3b9aa8565b7d' (2024-04-01) • Updated input 'flake-parts/nixpkgs-lib': 'github:NixOS/nixpkgs/1536926ef5621b09bba54035ae2bb6d806d72ac8?dir=lib' (2024-02-29) → 'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089?dir=lib' (2024-03-29) • Updated input 'nixpkgs': 'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089' (2024-03-29) → 'github:NixOS/nixpkgs/fd281bd6b7d3e32ddfa399853946f782553163b5' (2024-04-03) Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-04-07 | Add GritLM as a supported model. (#6513) | DAN™
2024-04-07 | sync : ggml | Georgi Gerganov
2024-04-07 | ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020) | Slava Primenko
The `cudaHostRegisterReadOnly` parameter was only introduced in CUDA 11.1. See this issue for more details: https://github.com/ggerganov/whisper.cpp/issues/2007
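The bypass is a compile-time version check: cudaHostRegisterReadOnly first appeared in CUDA 11.1 (CUDART_VERSION 11010), so older toolkits must drop the flag. A sketch of the pattern, assuming this is roughly what the synced code does:

```cpp
#include <cuda_runtime.h>

// Pin a host buffer for faster transfers; only request read-only
// registration when the toolkit is new enough to know the flag.
static cudaError_t pin_host_buffer(void * ptr, size_t size) {
    unsigned int flags = cudaHostRegisterPortable;
#if CUDART_VERSION >= 11010
    flags |= cudaHostRegisterReadOnly; // CUDA >= 11.1 only
#endif
    return cudaHostRegister(ptr, size, flags);
}
```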
2024-04-07 | scripts : sync ggml-cuda folder | Georgi Gerganov
2024-04-07 | Run make to build the project (#6457) | limitedAtonement
2024-04-07 | support/fix OPs GGML_TYPE_IQ4_NL, GGML_TYPE_IQ4_XS, GGML_TYPE_IQ3_XXS, GGML_TYPE_IQ3_S, GGML_TYPE_IQ2_XXS, GGML_TYPE_IQ2_XS, GGML_TYPE_IQ2_S, GGML_TYPE_IQ1_S, GGML_TYPE_IQ1_M (#6521) | Neo Zhang Jianyu
2024-04-06 | sync : ggml | Georgi Gerganov
2024-04-06 | backend : fix typo in scheduler documentation (ggml/781) | Daniel Bevenius
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-06 | Tests: Added integration tests for GBNF parser (#6472) | Clint Herron
* Added integration tests for GBNF parser to validate correctness of parsing, as well as correctness of string matching. Intended for use to pin behavior while working on performance improvements. * Fixing whitespace errors and cleaning error message alert to be clearer. * Removing hacky include to llama.cpp from grammar integration test now that needed functions are available via internal API. * Comment cleanup. * Reorganizing tests for readability. * Cleaning up debug message to make a bit more sense.
2024-04-06 | ci: bench: support sse and fix prompt processing time / server: add tokens usage in stream OAI response (#6495) | Pierrick Hymbert
* ci: bench: support sse and fix prompt processing time server: add tokens usage in stream mode * ci: bench: README.md EOL * ci: bench: remove total pp and tg as it is not accurate * ci: bench: fix case when there is no token generated * ci: bench: change to the 95 percentile for pp and tg as it is closer to what the server exports in metrics * ci: bench: fix finish reason rate
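On the switch to the 95th percentile: unlike totals or means, p95 is robust to a single slow request and matches what the server exports in its metrics. A generic nearest-rank sketch of the computation (not the actual bench tooling, which lives in the CI scripts):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Nearest-rank percentile: sort, then take the ceil(p/100 * n)-th sample.
double percentile(std::vector<double> samples, double p) {
    if (samples.empty()) {
        return 0.0;
    }
    std::sort(samples.begin(), samples.end());
    size_t rank = static_cast<size_t>(std::ceil(p / 100.0 * samples.size()));
    if (rank == 0) rank = 1;
    return samples[rank - 1];
}
```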
2024-04-05 | gguf.py : add licence and version to gguf writer (#6504) | Brian
2024-04-05 | readme : update UI list (#6503) | Hoang Nguyen
* Add MindMac to UI list * Update proprietary description Co-authored-by: slaren <slarengh@gmail.com> --------- Co-authored-by: slaren <slarengh@gmail.com>
2024-04-05 | bench : make n_batch and n_ubatch configurable in Batched bench (#6500) | Ting Sun
* bench: make n_batch and n_ubatch configurable * bench: update doc for batched bench
2024-04-05 | [SYCL] Fixed minor bug when enabling FP16 for non-Intel targets (#6464) | Ouadie EL FAROUKI
* moved INTEL_MKL guard from gemm_impl to gemm (wrapper) * Update ggml-sycl.cpp Co-authored-by: AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com> --------- Co-authored-by: AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com>
2024-04-04 | readme : add Dot to UI list (#6487) | alexpinel
2024-04-04 | readme : fix typo (#6481) | Jun Jie
2024-04-04 | server: add cURL support to server Dockerfiles (#6474) | Ed Lepedus
* server: add cURL support to `full.Dockerfile` * server: add cURL support to `full-cuda.Dockerfile` and `server-cuda.Dockerfile` * server: add cURL support to `full-rocm.Dockerfile` and `server-rocm.Dockerfile` * server: add cURL support to `server-intel.Dockerfile` * server: add cURL support to `server-vulkan.Dockerfile` * fix typo in `server-vulkan.Dockerfile` Co-authored-by: Georgi Gerganov <ggerganov@gmail.com> --------- Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-04 | ci: exempt master branch workflows from getting cancelled (#6486) | Minsoo Cheong
* ci: exempt master branch workflows from getting cancelled * apply to bench.yml