2023-12-01ggml : add ggml_soft_max_ext (#4256)Georgi Gerganov
* metal : implement soft_max_ext
* cuda : implement soft_max_ext
* ggml : implement soft_max_ext (CPU)
* batched-bench : print threads
* metal : simplify soft_max encoding
* cuda : use 512 threads for soft_max instead of 32
* ggml : update soft max cpu
* cuda : do warp-based block reduce
* cuda : increase max block size to 1024
* cuda : fix warp reduction initialization of shared mem
* metal : warp-based reduction for soft max kernel
* metal : warp-based reduce for rms_norm
* metal : simplify soft max kernel
* alloc : fix build with debug
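The core of the extended soft max can be illustrated with a plain CPU sketch: apply a scale before the numerically stable softmax (the real `ggml_soft_max_ext` also takes an optional mask, and the GPU kernels do the sum via warp-based block reduction). The function name and signature below are illustrative, not the actual ggml API:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative CPU sketch of an "extended" softmax: scale the logits,
// subtract the running max for numerical stability, exponentiate, and
// normalize. Hypothetical helper, not the real ggml implementation.
std::vector<float> soft_max_ext(const std::vector<float> & x, float scale) {
    float max_val = -INFINITY;
    for (float v : x) max_val = std::max(max_val, v * scale);

    std::vector<float> y(x.size());
    float sum = 0.0f;
    for (size_t i = 0; i < x.size(); ++i) {
        y[i] = std::exp(x[i] * scale - max_val); // subtract the max for stability
        sum += y[i];
    }
    for (float & v : y) v /= sum; // normalize so the outputs sum to 1
    return y;
}
```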
2023-12-01server : add --log-disable to disable logging to file (#4260)Ziad Ben Hadj-Alouane
* add --log-disable to disable logging to file in the server example
* typo fix
2023-12-01server : add single-client multi-prompt support (#4232)Ziad Ben Hadj-Alouane
* add multiprompt support
* cleanup
* more cleanup
* remove atomicity of id_gen, and change lock_guard to unique_lock on completion requests
* remove all references to mutex_multitasks
* Update examples/server/server.cpp
* change to set

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
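The lock_guard-to-unique_lock change matters whenever the completion path waits on a condition variable: `std::condition_variable::wait` must be able to release and reacquire the mutex, which `std::lock_guard` cannot do. A minimal sketch of the pattern, with hypothetical names (not the server's actual types):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Hypothetical task queue illustrating why waiting code needs
// std::unique_lock: cv.wait() unlocks the mutex while blocked.
struct TaskQueue {
    std::mutex mtx;
    std::condition_variable cv;
    int pending = 0;

    void push() {
        {
            std::lock_guard<std::mutex> lock(mtx); // fine here: no waiting
            ++pending;
        }
        cv.notify_one();
    }

    int wait_and_pop() {
        std::unique_lock<std::mutex> lock(mtx);   // unique_lock: cv can release it
        cv.wait(lock, [this] { return pending > 0; });
        return --pending;
    }
};
```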
2023-12-01make : fix Apple clang determination bug (#4272)WillCorticesAI
Co-authored-by: Will Findley <findley@gmail.com>
2023-12-01build : fix build info generation and cleanup Makefile (#3920)Jared Van Bortel
* cmake : fix joining of REAL_GIT_DIR
* fix includes with help from include-what-you-use
* make : remove unneeded deps and add test-rope target
* fix C includes in C++ source files
* Revert "fix includes with help from include-what-you-use" (reverts commit 635e9fadfd516d4604a0fecf4a854bfb25ad17ae)
2023-11-30llava : ShareGPT4V compatibility (vision encoder only loading) (#4172)John
* ShareGPT4V compatibility (vision encoder only loading)
  - Load only a CLIP vision encoder (as supplied by ShareGPT4V finetunes).
  - Correct the argument parsing for --img_mean and --img_std (previously they were accessed without being parsed).
  - Define defaults for img_mean and img_std equal to the llava 1.5 CLIP encoder, so they do not have to be provided.
* Update convert-image-encoder-to-gguf.py
2023-11-30main : pass LOG_TEE callback to llama.cpp log (#4033)Andrew Godfrey
* main : Call llama_log_set to use LOG_TEE
* tabs to spaces
2023-11-30readme : fix (#4135)vodkaslime
* fix: readme
* chore: resolve comments
2023-11-30docker : add finetune option (#4211)Juraj Bednar
2023-11-30batched.swift : update README.md (#4214)Miwa / Ensan
docs: update how to run
2023-11-30cmake : fix the metal file folder path (#4217)Li Tan
2023-11-30readme : fix typo (#4253)Dawid Wysocki
llama.cpp uses GitHub Actions, not Gitlab Actions.
2023-11-30llama : fix alignment of general.name in print meta (#4254)Daniel Bevenius
* llama: fix alignment of general.name in print meta

This commit fixes the alignment of the general.name field in the llm_load_print_meta function. Previously the output looked like this:

```console
llm_load_print_meta: model ftype    = mostly Q4_0
llm_load_print_meta: model params   = 13.02 B
llm_load_print_meta: model size     = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name = LLaMA v2
```

With this commit it looks like this:

```console
llm_load_print_meta: model ftype    = mostly Q4_0
llm_load_print_meta: model params   = 13.02 B
llm_load_print_meta: model size     = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name   = LLaMA v2
```

* llama: fix alignment of special tokens

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
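The alignment technique here is ordinary left-justified, fixed-width formatting: pad every key name to the same field width so the `=` signs line up. A minimal sketch with an illustrative width and helper name (not the actual llama.cpp code):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Left-justify the key in a fixed-width field so '=' columns align,
// as llm_load_print_meta does for its labels. Width is illustrative.
std::string format_meta(const char * key, const char * value) {
    char buf[128];
    std::snprintf(buf, sizeof(buf), "%-12s = %s", key, value);
    return buf;
}
```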
2023-11-30convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#4258)slaren
2023-11-30llama : fix typical sampling (#4261)tarcey
Typical sampling was broken: after copying new_candidates into candidates, the "sorted" flag was left set to "true", even though the new data is no longer sorted by probability. The patch sets "sorted" to false. Test: generating with temp=0.0001 (approx. argmax) should produce the same sequence at typical >= 1.0 (approx. disabled) and at typical = 0.9999 (enters the typical sampling code path).
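The bug pattern generalizes: any cached "is sorted" flag must be invalidated when the underlying data is replaced. A minimal sketch with hypothetical struct and field names (not the exact llama.cpp types):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// A candidate list carrying a cached "sorted" flag. Any code that
// overwrites the list must clear the flag, or later consumers will
// skip the re-sort and operate on unsorted probabilities.
struct Candidates {
    std::vector<float> p; // probabilities
    bool sorted = false;
};

void replace_with(Candidates & dst, std::vector<float> new_p) {
    dst.p = std::move(new_p);
    dst.sorted = false; // the fix: the copied data is no longer sorted
}

void ensure_sorted(Candidates & c) {
    if (!c.sorted) {
        std::sort(c.p.begin(), c.p.end(), std::greater<float>());
        c.sorted = true;
    }
}
```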
2023-11-30py : fix oai proxy (#3972)rhjdvsgsgks
* fix oai proxy
  - fix generation not being stopped when the bot stops talking in chat mode
  - fix possible `slot_id` not existing
  - response for CORS (and preflight)
* oai proxy: workaround for some clients (such as Chatbox)
* use stop as separator to replace the hardcoded `\n`
2023-11-29examples : add readme filesGeorgi Gerganov
2023-11-29readme : add FreeChat (#4248)Peter Sugihara
2023-11-28ggml : restore abort() in GGML_ASSERT (#4242)Jared Van Bortel
2023-11-28ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload checks in llama.cpp (#4240)Georgi Gerganov
* ggml : use blas even if src0 is not F32
* llama : use n_threads_batch only when n_tokens >= 32
* llama : revert n_threads_batch logic
2023-11-27cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970)bandoti
* Split CPP generation from build-info query
* Remove blank lines
* Add BUILD_SHARED_LIBS option
2023-11-27readme : add Amica to UI list (#4230)Kasumi
2023-11-27examples : iOS example with swift ui (#4159)Bailey Chittle
* copy to llama.cpp as subdir
* attempt enabling metal, fails
* ggml metal compiles!
* Update README.md
* initial conversion to new format, utf8 errors?
* bug fixes, but now has an invalid memory access :(
* added O3, now has insufficient memory access
* begin sync with master
* update to match latest code, new errors
* fixed it!
* fix for loop conditionals, increase result size
* fix current workflow errors
* attempt a llama.swiftui workflow
* Update .github/workflows/build.yml

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-11-26ggml : fix -Warray-bounds warning with gcc (#4231)Jared Van Bortel
2023-11-26lookahead : support `-n -1` infinite generationGeorgi Gerganov
2023-11-26readme : update hot topicsGeorgi Gerganov
2023-11-26lookahead : add example for lookahead decoding (#4207)Georgi Gerganov
* lookahead : init
* lookahead : generate and store n-grams
* lookahead : use loop instead of recursion to generate n-grams
* lookahead : initial working implementation
* lookahead : filter repeating n-grams
* lookahead : use deterministic init
* lookahead : add to Makefile
* lookahead : fix a bug in the seq_id of the lookahead tokens
* lookahead : add comments

Co-authored-by: slaren <slarengh@gmail.com>
2023-11-26metal : fix yarn (#4220)Xiao-Yong Jin
get the correct n_orig_ctx in metal
2023-11-25scripts : Use mmap in torch load (#4202)Galunid
* Use mmap in torch load, prefer .bin files when loading
* Revert .bin > .safetensors preference
2023-11-25llama : grammar `reserve` space in `decode_utf8` (#4210)Marcus Dunn
* reserve space for codepoints
* improvement for the appended 0
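The optimization rests on a simple bound: a UTF-8 decode can never produce more codepoints than it consumes bytes, so reserving `size() + 1` (for the appended 0) up front avoids repeated reallocation while appending. A simplified sketch, for illustration only (the real decoder combines continuation bytes into proper codepoints):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Simplified decoder sketch: reserve the worst-case capacity before
// appending, then push one entry per decoded sequence plus a final 0.
std::vector<uint32_t> decode_utf8(const std::string & src) {
    std::vector<uint32_t> result;
    result.reserve(src.size() + 1); // worst case: one codepoint per byte, plus the 0
    size_t i = 0;
    while (i < src.size()) {
        unsigned char c = src[i];
        // lead-byte length (simplified; real code validates continuation bytes)
        int len = c < 0x80 ? 1 : c < 0xE0 ? 2 : c < 0xF0 ? 3 : 4;
        result.push_back(c); // simplified: real decoding merges the sequence
        i += len;
    }
    result.push_back(0); // the appended 0 mentioned in the commit
    return result;
}
```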
2023-11-25Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189)crasm
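The documented convention replaces a NaN sentinel with "any negative value means unspecified", which is easier to test for and to pass on a command line. A hedged sketch of the check, with an illustrative helper name:

```cpp
#include <cassert>

// Negative yarn_ext_factor means "unspecified: fall back to the model
// default", avoiding the NaN sentinel. Hypothetical helper for illustration.
float resolve_ext_factor(float requested, float model_default) {
    return requested < 0.0f ? model_default : requested;
}
```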
2023-11-25readme : update hot topicsGeorgi Gerganov
2023-11-25server : OAI API compatibility (#4198)Georgi Gerganov
* Add openai-compatible POST /v1/chat/completions API endpoint to server example
* fix code style
* Update server README.md
* Improve server README.md
* Fix server.cpp code style according to review
* server : some style changes
* server : indentation
* server : enable special tokens during tokenization by default
* server : minor code style
* server : change random string generator
* straightforward /v1/models endpoint

Co-authored-by: kir-gadjello <111190790+kir-gadjello@users.noreply.github.com>
Co-authored-by: Tobi Lütke <tobi@Tobis-MacBook-Pro.local>
2023-11-24llama : set metal log callback correctly (#4204)slaren
2023-11-24ggml-cuda : support stablelm rope (#4156)slaren
* ggml-cuda : support stablelm rope
* remove unused freq_base kernel parameter
* add n_dims parameter to llm_build_k_shift, default to n_rot via overload
* llama : fix llm_build_k_shift args

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-11-24convert : fix tensors using grad in some models (#4173)Galunid
2023-11-24main.swift : fix eos checking (#4197)eastriver
llama_token_eos(const struct llama_model *) was being passed a variable of type struct llama_context instead of the model.
2023-11-24readme : use PATH for Windows ROCm (#4195)Aaryaman Vasishta
* Update README.md to use PATH for Windows ROCm
* Update README.md
2023-11-23Fix incorrect format strings and uninitialized variables. (#4133)Haohui Mai
* Fix incorrect format strings and uninitialized variables
* Address comments
* Add the missing include statement
2023-11-23llama : KV cache view API + better KV cache management (#4170)Georgi Gerganov
* llama : keep track of used KV cells + better KV cache management
* llama : zero KV cache used upon clear
* llama : allow exporting a view of the KV cache (#4180)
  - Allow exporting a view of the KV cache
  - Allow dumping the sequences per cell in common
  - Track max contiguous cells value and position as well
  - Fix max contiguous empty cells index calculation; make dump functions deal better with lengths or sequence counts > 10
  - Fix off-by-one error in dump_kv_cache_view
  - Add doc comments for KV cache view functions; eliminate cell sequence struct, use llama_seq_id directly; minor cleanups
* common : add -dkvc arg for enabling kv cache dumps

Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
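The "max contiguous empty cells" bookkeeping amounts to a single scan over the cells, tracking the longest free run and where it starts. A sketch of the idea with illustrative types and names (not the actual llama.cpp KV cache view API):

```cpp
#include <cassert>
#include <vector>

// Hypothetical stats a KV cache view might expose: used-cell count plus
// the length and start index of the longest contiguous free run.
struct KvStats {
    int used = 0;
    int max_contiguous = 0;
    int max_contiguous_idx = -1;
};

KvStats scan_cells(const std::vector<bool> & cell_used) {
    KvStats s;
    int run = 0; // length of the current free run
    for (int i = 0; i < (int) cell_used.size(); ++i) {
        if (cell_used[i]) {
            s.used++;
            run = 0;
        } else {
            run++;
            if (run > s.max_contiguous) {
                s.max_contiguous     = run;
                s.max_contiguous_idx = i - run + 1; // start of this free run
            }
        }
    }
    return s;
}
```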
2023-11-23readme : update hot topicsGeorgi Gerganov
2023-11-23examples : fix typo in parallel example doc comment (#4181)Daniel Bevenius
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2023-11-23docs : add llama-star arch ideaGeorgi Gerganov
2023-11-21stablelm : simplify + speedup generation (#4153)Galunid
2023-11-20finetune - update readme to mention llama support only (#4148)Galunid
2023-11-20readme : update ROCm Windows instructions (#4122)Aaryaman Vasishta
* Update README.md

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2023-11-20main : Add ChatML functionality to main example (#4046)Seb C
Co-authored-by: Sebastian Cramond <sebby37@users.noreply.github.com>
2023-11-20ci : add flake8 to github actions (python linting) (#4129)Galunid
Disabled rules:
* E203 Whitespace before ':' - disabled because we often use 'C' style where values are aligned
* E211 Whitespace before '(' - disabled because we often use 'C' style where values are aligned
* E221 Multiple spaces before operator - disabled because we often use 'C' style where values are aligned
* E225 Missing whitespace around operator - disabled because it's broken so often it seems like a standard
* E231 Missing whitespace after ',', ';', or ':' - disabled because we often use 'C' style where values are aligned
* E241 Multiple spaces after ',' - disabled because we often use 'C' style where values are aligned
* E251 Unexpected spaces around keyword / parameter equals - disabled because it's broken so often it seems like a standard
* E261 At least two spaces before inline comment - disabled because it's broken so often it seems like a standard
* E266 Too many leading '#' for block comment - sometimes used as a "section" separator
* E501 Line too long - disabled because it's broken so often it seems like a standard
* E701 Multiple statements on one line (colon) - broken only in convert.py when defining abstract methods (we can use # noqa instead)
* E704 Multiple statements on one line - broken only in convert.py when defining abstract methods (we can use # noqa instead)
2023-11-20speculative : fix prompt tokenization in speculative example (#4025)Branden Butler
* Support special tokens and not adding BOS to prompt in speculative
* Adapt to new should_add_bos function
* Ensure tgt and dft have same add_bos setting
2023-11-19Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"Georgi Gerganov
This reverts commit 05e8301e4593e2a67b4bae24f093dd12ce5cc7c2.