Age | Commit message | Author
2024-03-04 | sync : ggml | Georgi Gerganov
ggml-ci
2024-03-04 | ggml : introduce ggml_status (ggml/750) | Michael Podvitskiy
* using enum as an exit code instead of macros
* update return type from enum to unsigned int
* indentation fix
* compound update: ggml_compute_exit_code -> ggml_status; changed ggml_status from a bit-field type to simple codes; ggml_status to string cast
* ggml_status to string cast
* GGML_CALL was removed
Co-authored-by: slaren <slarengh@gmail.com>
---------
Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
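
A minimal sketch of the pattern described above (plain status codes instead of exit-code macros, plus a status-to-string cast). The enum name, values and helper below are illustrative stand-ins, not copied from ggml.h:

    // Hedged sketch: simple status codes replacing macro exit codes, plus a
    // string cast. Names and values are illustrative, not the real ggml enum.
    enum ggml_status_example {
        EXAMPLE_STATUS_ALLOC_FAILED = -2,
        EXAMPLE_STATUS_FAILED       = -1,
        EXAMPLE_STATUS_SUCCESS      =  0,
        EXAMPLE_STATUS_ABORTED      =  1,
    };

    static const char * example_status_to_string(ggml_status_example status) {
        switch (status) {
            case EXAMPLE_STATUS_ALLOC_FAILED: return "alloc failed";
            case EXAMPLE_STATUS_FAILED:       return "failed";
            case EXAMPLE_STATUS_SUCCESS:      return "success";
            case EXAMPLE_STATUS_ABORTED:      return "aborted";
        }
        return "unknown";
    }
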
2024-03-04 | cmake : handle cases where git index is not found in .git (#5844) | Dane Madsen
* Update CMakeLists.txt
* Update CMakeLists.txt
2024-03-04 | speculative : implement stochastic speculative sampling (#5625) | Minsoo Cheong
* (WIP) Implement stochastic speculative decoding
* sample from residual distribution on draft accept failure
* fix #5657: force greedy sampling with probs when temp is 0
* remove p_accept parameter
* fix style
* remove unused variables
* add srand() in speculative.cpp
* replace use of rand() with mt19937 sampling
* fixes based on review (@JohannesGaessler)
* fix r random generation
* randomly select next sequence to verify + fix bug in memory freeing
* fix bug in active_seqs sync
* fix uniform int distribution initialization
* remove warnings from comparison between int and size_t
* check grammar in `llama_sample_probability_distribution_impl`
* remove malloc code by utilizing vectors
* add PR link to README
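
The core step named above ("sample from residual distribution on draft accept failure") is the standard speculative-sampling acceptance rule. A hedged, self-contained C++ sketch of that rule follows; the function and parameter names are hypothetical and this is not the actual examples/speculative code:

    // Accept a drafted token with probability min(1, p_target/p_draft);
    // on rejection, resample from the residual max(0, p_target - p_draft),
    // renormalized. Uses mt19937 as in the commit (instead of rand()).
    #include <algorithm>
    #include <random>
    #include <vector>

    static int accept_or_resample(const std::vector<float> & p_target,
                                  const std::vector<float> & p_draft,
                                  int drafted_token, std::mt19937 & rng) {
        std::uniform_real_distribution<float> unif(0.0f, 1.0f);
        const float pt = p_target[drafted_token];
        const float pd = p_draft[drafted_token];
        if (pd > 0.0f && unif(rng) < pt / pd) {
            return drafted_token; // draft accepted
        }
        // draft rejected: sample from the residual distribution
        std::vector<float> residual(p_target.size());
        float sum = 0.0f;
        for (size_t i = 0; i < p_target.size(); ++i) {
            residual[i] = std::max(0.0f, p_target[i] - p_draft[i]);
            sum += residual[i];
        }
        if (sum <= 0.0f) {
            return drafted_token; // degenerate case: distributions coincide here
        }
        std::discrete_distribution<int> dist(residual.begin(), residual.end());
        return dist(rng);
    }
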
2024-03-04 | add alias for chat template (#5858) | Xuan Son Nguyen
2024-03-04 | sync : ggml | Georgi Gerganov
2024-03-04 | add some new ops, fix some operators and add batch operations to certain operators. (ggml/747) | leejet
* cuda: fix group_norm
* cuda: add batch inference support for ggml_pad/ggml_upscale
* add ggml_arange
* add ggml_timestep_embedding
* update ggml_arange/ggml_timestep_embedding tests
* cuda: fix im2col
* add ggml_arange/ggml_timestep_embedding support for metal backend
* fix some bugs
* fix some bugs
* Update ggml.h
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update ggml-cuda.cu
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update ggml-metal.m
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update ggml-metal.m
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update ggml-metal.metal
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* modify according to the review comments
* ggml : fix compile warnings + code style
* ggml : normalize compute_forward calls + fix seg fault in debug
* minor
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
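
For context on ggml_timestep_embedding, the usual sinusoidal timestep embedding from diffusion models looks roughly like the reference computation below. This is a hedged sketch of the math, assuming the op follows the common convention; it is not the ggml kernel itself:

    // Reference sinusoidal timestep embedding: first half cosines, second half
    // sines, with log-spaced frequencies. Assumes the common diffusion-model
    // convention; parameter names are illustrative.
    #include <cmath>
    #include <vector>

    static std::vector<float> timestep_embedding_example(float timestep, int dim,
                                                         int max_period = 10000) {
        std::vector<float> emb(dim, 0.0f);
        const int half = dim / 2;
        for (int i = 0; i < half; ++i) {
            const float freq = std::exp(-std::log((float) max_period) * i / half);
            emb[i]        = std::cos(timestep * freq);
            emb[i + half] = std::sin(timestep * freq);
        }
        return emb; // an odd dim leaves the last element zero-padded
    }
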
2024-03-04 | common : use LLAMA_DEFAULT_SEED (#5855) | DAN™
2024-03-04 | main : support special tokens as reverse/anti prompt (#5847) | DAN™
* Support special tokens as reverse/anti prompt.
* Tokenize antiprompts only once.
* main : minor
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
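
A hedged sketch of the "tokenize antiprompts only once" idea: each reverse/anti prompt is tokenized a single time up front (which also lets a special token act as an antiprompt), and generation stops when the tail of the output token stream matches one of them. The helper and type names below are placeholders, not the llama.cpp API:

    #include <algorithm>
    #include <vector>

    using token_id = int; // placeholder for llama's token type

    // true if the generated stream ends with any pre-tokenized antiprompt
    static bool ends_with_antiprompt(const std::vector<token_id> & output,
                                     const std::vector<std::vector<token_id>> & antiprompts) {
        for (const auto & ap : antiprompts) {
            if (!ap.empty() && ap.size() <= output.size() &&
                std::equal(ap.begin(), ap.end(), output.end() - ap.size())) {
                return true; // reverse/anti prompt reached; stop generating
            }
        }
        return false;
    }
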
2024-03-03 | cuda : fix data race in soft max (#5853) | slaren
2024-03-03 | readme : add API changes section | Georgi Gerganov
2024-03-03 | llama : allow for user specified embedding pooling type (#5849) | Douglas Hanley
* allow for user specified pooling type
* llama : use enum types over int
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-03-03 | gguf-dump : support i-quants (#5841) | Nindaleth
Co-authored-by: Black_Fox <radekliska@gmail.com>
2024-03-03 | llama : fix llama_copy_state_data with fragmented KV cache (#5840) | compilade
The row size of the saved states was based on kv_self.head while it should be based on llama_kv_cache_cell_max. Existing session files should still work.
* llama : fix llama_kv_cache_cell_max inability to return 1
  I've also changed its return type to uint32_t, because this function is always used to set the value of uint32_t variables, and because the index already has this type.
* llama : fix state size calculation
  Some bytes in the state were unaccounted for in llama_get_state_size. Since the logits reserve so much space, it did not cause problems.
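
To make the kv_self.head vs llama_kv_cache_cell_max distinction concrete, here is a hedged sketch of what a "cell max" scan does conceptually: it returns one past the highest occupied cell, which is what the saved-state row size must cover when the cache is fragmented. The struct and field names are simplified stand-ins, not the llama.cpp internals:

    #include <cstdint>
    #include <vector>

    struct kv_cell_example {
        int32_t pos = -1; // -1 means the cell is empty
    };

    // returns one past the index of the last occupied cell (so it can return 1)
    static uint32_t kv_cache_cell_max_example(const std::vector<kv_cell_example> & cells) {
        for (uint32_t i = (uint32_t) cells.size(); i > 0; --i) {
            if (cells[i - 1].pos >= 0) {
                return i;
            }
        }
        return 0;
    }
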
2024-03-03 | ci : schedule slow server tests only on Release or on demand (#5839) | Pierrick Hymbert
2024-03-03 | server : init http requests thread pool with --parallel if set (#5836) | Pierrick Hymbert
2024-03-02 | flake.lock: Update (#5842) | Georgi Gerganov
Flake lock file updates:
• Updated input 'flake-parts':
  'github:hercules-ci/flake-parts/b253292d9c0a5ead9bc98c4e9a26c6312e27d69f' (2024-02-01)
  → 'github:hercules-ci/flake-parts/f7b3c975cf067e56e7cda6cb098ebe3fb4d74ca2' (2024-03-01)
• Updated input 'flake-parts/nixpkgs-lib':
  'github:NixOS/nixpkgs/97b17f32362e475016f942bbdfda4a4a72a8a652?dir=lib' (2024-01-29)
  → 'github:NixOS/nixpkgs/1536926ef5621b09bba54035ae2bb6d806d72ac8?dir=lib' (2024-02-29)
• Updated input 'nixpkgs':
  'github:NixOS/nixpkgs/cbc4211f0afffe6dfd2478a62615dd5175a13f9a' (2024-02-23)
  → 'github:NixOS/nixpkgs/1536926ef5621b09bba54035ae2bb6d806d72ac8' (2024-02-29)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-03-02 | server: tests: passkey challenge / self-extend with context shift demo (#5832) | Pierrick Hymbert
* server: tests: add models endpoint scenario
* server: /v1/models add some metadata
* server: tests: add debug field in context before scenario
* server: tests: download model from HF, add batch size
* server: tests: add passkey test
* server: tests: add group attention params
* server: do not truncate prompt tokens if self-extend through group attention is enabled
* server: logs: do not truncate log values
* server: tests - passkey - first good working value of nga
* server: tests: fix server timeout
* server: tests: fix passkey, add doc, fix regex content matching, fix timeout
* server: tests: fix regex content matching
* server: tests: schedule slow tests on master
* server: metrics: fix when no prompt processed
* server: tests: self-extend add llama-2-7B and Mixtral-8x7B-v0.1
* server: tests: increase timeout for completion
* server: tests: keep only the PHI-2 test
* server: tests: passkey add a negative test
2024-03-02 | llama : add abort_callback to interrupt computation (#5409) | Michael Podvitskiy
* using abort_callback from ggml to stop llama computation
* format fix
* a brief explaining comment
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
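
A hedged sketch of the abort-callback pattern this commit wires through: the compute loop periodically calls a user-supplied callback and stops early when it returns true. The struct and function names are illustrative; the real hook is passed via the ggml/llama parameter structs:

    #include <atomic>

    struct compute_params_example {
        bool (*abort_callback)(void * data) = nullptr;
        void * abort_callback_data          = nullptr;
    };

    static std::atomic<bool> g_should_stop{false};

    static bool my_abort_cb(void * /*data*/) {
        return g_should_stop.load(); // return true to interrupt the computation
    }

    static int run_graph_example(const compute_params_example & params, int n_nodes) {
        for (int i = 0; i < n_nodes; ++i) {
            if (params.abort_callback &&
                params.abort_callback(params.abort_callback_data)) {
                return 1; // aborted
            }
            // ... compute node i ...
        }
        return 0; // success
    }
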
2024-03-02 | ggml : fix IQ3_S AVX implementation (#5834) | Georgi Gerganov
ggml-ci
2024-03-02 | convert : automatically fall back to HfVocab if tokenizer.model doesn't exist (#5821) | Jared Van Bortel
2024-03-02 | convert-hf : make model class definitions self-contained (#5825) | Jared Van Bortel
2024-03-02 | ggml : IQ3_S improvements (#5829) | Kawrakow
* iq3_s: somewhat faster AVX2 dot product
  On a Ryzen 7950X, TG-128 increases to 16 t/s from 15.5 t/s using 16 threads. For 8 threads it is 13.85 t/s vs 11.75 t/s. PP-512 increases to 28.5 t/s from 23.8 t/s.
* iq3_s: somewhat faster ARM_NEON dot product
  Still dog slow - 10.7 t/s up from 9.9 t/s.
* iq3_s: another small ARM_NEON improvement
  10.7 -> 11.0 t/s. Using vmulq_s8 is faster than the xor - sub trick that works best on AVX2.
* iq3_s: minor improvement on Metal
  49.4 t/s -> 50.3 t/s
* iq3_s: PPL improvement
  E.g., for a context of 4096, LLaMA-v2-7B goes to 5.1340 from 5.1653.
* iq3_s: use new grid everywhere
* Fix ARM_NEON
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-03-02 | scripts : add pod-llama.sh | Georgi Gerganov
2024-03-02 | llama : refactor internal quantization functions (#5830) | Xuan Son Nguyen
2024-03-02 | llama : fix segfault from unknown model arch name (#5820) | compilade
* llama : fix segfault from unknown model arch name
* llama : make all LLM maps const
  This also requires using `std::map::at` instead of its `operator[]` which does not exist for const maps.
* llama : name LLM_ARCH_UNKNOWN to "(unknown)"
  This avoids errors from `std::map::at` when getting the general name of the model architecture. Using "(unknown)" instead of an empty string as per suggestion https://github.com/ggerganov/llama.cpp/pull/5820#issuecomment-1973735284
* llama : remove redundant inner const for LLM_TENSOR_NAMES
  The extra const won't do anything here as const maps return const references to values.
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* llama : remove redundant nullptr check in llm_arch_from_string
  Since LLM_ARCH_NAMES is a const map, no spurious elements with a NULL name are inserted anymore, so this check is dead code.
---------
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
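
The const-map points above boil down to a small pattern; a hedged sketch with illustrative names (not the llama.cpp tables): a const std::map has no operator[], so lookups go through find()/at(), and the unknown architecture gets a real "(unknown)" entry instead of silently inserting an empty name:

    #include <map>
    #include <string>

    enum llm_arch_example { ARCH_LLAMA_EX, ARCH_UNKNOWN_EX };

    static const std::map<llm_arch_example, const char *> ARCH_NAMES_EX = {
        { ARCH_LLAMA_EX,   "llama"     },
        { ARCH_UNKNOWN_EX, "(unknown)" },
    };

    // reverse lookup by name; no operator[] is used, so nothing is inserted
    static llm_arch_example arch_from_string_example(const std::string & name) {
        for (const auto & kv : ARCH_NAMES_EX) {
            if (name == kv.second) {
                return kv.first;
            }
        }
        return ARCH_UNKNOWN_EX;
    }
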
2024-03-02 | Support multiple GPUs (split mode) on SYCL backend (#5806) | Neo Zhang Jianyu
* support multiple cards: split-mode - layer|row
* rm warning
* rebase with master, support two new OPs, close feature for -sm=row, fix for unit test
* update news
* fix merge error
* update according to review comments
2024-03-02 | workflows : remove nocleanup arg for check-requirements.sh (#5826) | crasm
Reduces peak tmpfs usage and should prevent the check from failing due to running out of space. Fixes the 'No space left on device' issue mentioned in #5703.
2024-03-01 | build(nix): Introduce flake.formatter for `nix fmt` (#5687) | Tushar
* build(nix): Introduce flake.formatter for `nix fmt`
* chore: Switch to pkgs.nixfmt-rfc-style
2024-03-01 | convert-hf-to-gguf : require einops for InternLM2ForCausalLM (#5792) | nold
2024-03-01 | llama : add StarCoder2 support (#5795) | Sourab Mangrulkar
* Add support for starcoder2
* handle rope type
* skip rope freq and rotary embeddings from being serialized
* resolve comments
* Update llama.cpp
* remove redundant changes
* handle `rope-theta`
* llama : change starcoder2 rope type
* address comment
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-03-01 | server : remove api_like_OAI.py proxy script (#5808) | Georgi Gerganov
2024-03-01 | ggml-vulkan: fix VULKAN_CHECK_RESULTS flag, which was previously broken (#5813) | ddpasa
2024-03-01 | gemma : fix bfloat16 -> float16 conversion issue (#5810) | kunal-vaishnavi
2024-03-01 | common : fix flag `--logits-all` to `--all-logits` (#5805) | Miwa / Ensan
2024-03-01 | llama : cleanup unused mmq flags (#5772) | Pierrick Hymbert
* cleanup unused --no-mul-mat-q, -nommq, -mmq, --mul-mat-q, mul_mat_q
* remove: mul_mat_q in compare llama bench and usage
* update llama-bench
---------
Co-authored-by: slaren <slarengh@gmail.com>
2024-03-01 | unicode : switch to multimap based nfd_map (#5799) | Douglas Hanley
* switch to multimap based nfd_map due to compile time issues
* simplify multimap keys
* don't construct new locale every time
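
A hedged sketch of the multimap-based nfd_map idea: a codepoint that decomposes into several codepoints occupies several entries under the same key, and equal_range() walks them in insertion order. The single entry below (U+00E9 → U+0065 U+0301) is illustrative, not the real table:

    #include <cstdint>
    #include <map>
    #include <vector>

    static const std::multimap<uint32_t, uint32_t> nfd_map_example = {
        { 0x00E9, 0x0065 }, // 'é' -> 'e'
        { 0x00E9, 0x0301 }, //        + combining acute accent
    };

    static std::vector<uint32_t> nfd_decompose_example(uint32_t cp) {
        const auto range = nfd_map_example.equal_range(cp);
        if (range.first == range.second) {
            return { cp }; // no decomposition entry
        }
        std::vector<uint32_t> out;
        for (auto it = range.first; it != range.second; ++it) {
            out.push_back(it->second);
        }
        return out;
    }
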
2024-03-01 | server: allow to override threads server pool with --threads-http (#5794) | Pierrick Hymbert
2024-03-01 | ci : add Ubuntu 22 Vulkan CI run (#5789) | Eve
2024-03-01 | server : fix newlines in help (#5785) | Georgi Gerganov
2024-03-01 | [SYCL] Use batched mul_mat pathway (#5591) | AidanBeltonS
* Use batched mul_mat pathway
* rm extra line
* Explicitly state scaled data type
---------
Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-02-29 | Server: normalize naming (#5779) | Xuan Son Nguyen
* server: normalize naming
* fix spacing
2024-02-29 | llama : constified `llama_set_state_data`'s `src` (#5774) | Marcus Dunn
2024-02-28 | ci : reduce 3b ppl chunks to 1 to avoid timeout (#5771) | Georgi Gerganov
ggml-ci
2024-02-28 | make portability_enumeration_ext apple only (#5757) | Eve
2024-02-28 | llama : remove deprecated API (#5770) | Georgi Gerganov
ggml-ci
2024-02-28 | awq-py : remove (#5768) | Georgi Gerganov
2024-02-28 | sync : ggml | Georgi Gerganov
2024-02-28 | add google magika inference example (ggml/748) | slaren
* add magika inference example
* ggml : fix unaligned accesses in custom ops
* ggml : fix FP32 GELU for values that exceed the FP16 range
* use ggml_pool_1d
* add README
* Update README.md
* pad inputs if the files are too small
* cleanup
ggml-ci
2024-02-28 | Introduce backend GUIDs (ggml/743) | UEXTM.com
* Introduce backend GUIDs
  Initial proposed implementation of backend GUIDs (discussed in https://github.com/ggerganov/ggml/pull/741)
  Hardcoded CPU backend GUID (for now)
  Change ggml_backend_is_cpu logic to use GUID
* Remove redundant functions
  Remove redundant functions `ggml_backend_i::get_name` and `ggml_backend_guid` which are not desired for future expansion
* Add spaces to match style
  Co-authored-by: slaren <slarengh@gmail.com>
* Fix brace style to match
  Co-authored-by: slaren <slarengh@gmail.com>
* Add void to () in function signature
  Co-authored-by: slaren <slarengh@gmail.com>
* Add back ggml_backend_guid and make CPU_GUID a local static in ggml_backend_cpu_guid
* add guids to all backends
ggml-ci
---------
Co-authored-by: slaren <slarengh@gmail.com>
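
A hedged sketch of the backend-GUID idea: each backend exposes a 16-byte identifier, and a check like ggml_backend_is_cpu becomes a byte comparison against the CPU GUID rather than a name-string compare. The types and the GUID value below are illustrative, not the ggml definitions:

    #include <cstdint>
    #include <cstring>

    struct backend_guid_example { uint8_t bytes[16]; };

    // placeholder GUID for the CPU backend (the real value lives in ggml-backend)
    static const backend_guid_example CPU_GUID_EX = {
        { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
          0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10 }
    };

    struct backend_example { backend_guid_example guid; };

    static bool backend_is_cpu_example(const backend_example & b) {
        return std::memcmp(b.guid.bytes, CPU_GUID_EX.bytes, sizeof(b.guid.bytes)) == 0;
    }
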