2023-11-25  scripts : Use mmap in torch load (#4202)  (Galunid)
* Use mmap in torch load, prefer .bin files when loading
* Revert .bin > .safetensors preference
2023-11-25  llama : grammar `reserve` space in `decode_utf8` (#4210)  (Marcus Dunn)
* reserve space for codepoints
* improvement for the appended 0
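The technique here is just pre-sizing the output buffer. A minimal sketch of the pattern, with the decoder body simplified to single-byte input for brevity (an illustration, not the actual llama.cpp grammar code):

```cpp
// Sketch: pre-reserve the output vector before decoding, so repeated
// push_back calls never reallocate. Each codepoint consumes at least one
// input byte, and one extra slot covers the appended 0.
#include <cstdint>
#include <string>
#include <vector>

std::vector<uint32_t> decode_utf8(const std::string & src) {
    std::vector<uint32_t> code_points;
    code_points.reserve(src.size() + 1); // upper bound: all-ASCII input, plus the appended 0

    for (size_t i = 0; i < src.size(); i++) {
        // simplified: a real decoder reads 1-4 bytes per codepoint here
        code_points.push_back(static_cast<uint32_t>(static_cast<unsigned char>(src[i])));
    }
    code_points.push_back(0); // sentinel, accounted for in the reserve above
    return code_points;
}
```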
2023-11-25  Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189)  (crasm)
2023-11-25  readme : update hot topics  (Georgi Gerganov)
2023-11-25  server : OAI API compatibility (#4198)  (Georgi Gerganov)
* Add openai-compatible POST /v1/chat/completions API endpoint to server example
* fix code style
* Update server README.md
* Improve server README.md
* Fix server.cpp code style according to review
* server : some style changes
* server : indentation
* server : enable special tokens during tokenization by default
* server : minor code style
* server : change random string generator
* straightforward /v1/models endpoint
---------
Co-authored-by: kir-gadjello <111190790+kir-gadjello@users.noreply.github.com>
Co-authored-by: Tobi Lütke <tobi@Tobis-MacBook-Pro.local>
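A hedged client-side sketch of the new endpoint using libcurl; the host, port, and JSON payload are assumptions for illustration, only the route and the OpenAI-style request shape come from the commit:

```cpp
// Sketch: POST a chat completion request to the server example's new
// OpenAI-compatible endpoint. Build with: g++ client.cpp -lcurl
#include <curl/curl.h>
#include <cstdio>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL * curl = curl_easy_init();
    if (!curl) {
        return 1;
    }

    const char * body =
        "{\"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}]}";

    curl_slist * headers = curl_slist_append(nullptr, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    // with no write callback installed, libcurl prints the JSON response to stdout
    const CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK) {
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
    }

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```

Any OpenAI-compatible client pointed at the server's base URL should be able to issue the same request.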
2023-11-24  llama : set metal log callback correctly (#4204)  (slaren)
2023-11-24  ggml-cuda : support stablelm rope (#4156)  (slaren)
* ggml-cuda : support stablelm rope
* remove unused freq_base kernel parameter
* add n_dims parameter to llm_build_k_shift, default to n_rot via overload
* llama : fix llm_build_k_shift args
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-11-24  convert : fix tensors using grad in some models (#4173)  (Galunid)
2023-11-24  main.swift : fix eos checking (#4197)  (eastriver)
llama_token_eos(const struct llama_model *) was being passed a variable of type struct llama_context instead of the model.
2023-11-24  readme : use PATH for Windows ROCm (#4195)  (Aaryaman Vasishta)
* Update README.md to use PATH for Windows ROCm
* Update README.md
* Update README.md
2023-11-23  Fix incorrect format strings and uninitialized variables. (#4133)  (Haohui Mai)
* Fix incorrect format strings and uninitialized variables.
* Address comments
* Add the missing include statement
2023-11-23  llama : KV cache view API + better KV cache management (#4170)  (Georgi Gerganov)
* llama : keep track of used KV cells + better KV cache management
* llama : zero KV cache used upon clear
ggml-ci
* llama : allow exporting a view of the KV cache (#4180)
* Allow exporting a view of the KV cache
* Allow dumping the sequences per cell in common
* Track max contiguous cells value and position as well
* Fix max contiguous empty cells index calculation
  Make dump functions deal with lengths or sequences counts > 10 better
* Fix off by one error in dump_kv_cache_view
* Add doc comments for KV cache view functions
  Eliminate cell sequence struct; use llama_seq_id directly
  Minor cleanups
* common : add -dkvc arg for enabling kv cache dumps
---------
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
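A minimal sketch of how the exported view might be consumed, using the function and field names this PR introduces; treat the exact signatures as assumptions against llama.h at this revision:

```cpp
// Sketch: snapshot KV cache occupancy with the view API from #4170.
// Assumes a llama_context * ctx that has already processed some tokens.
#include "llama.h"
#include <cstdio>

void print_kv_usage(const llama_context * ctx) {
    // track up to 4 sequence ids per cell in the exported view
    llama_kv_cache_view view = llama_kv_cache_view_init(ctx, 4);

    llama_kv_cache_view_update(ctx, &view); // refresh the snapshot

    printf("cells: %d, used: %d, max contiguous free run: %d (at index %d)\n",
           view.n_cells, view.used_cells,
           view.max_contiguous, view.max_contiguous_idx);

    llama_kv_cache_view_free(&view);
}
```

The common library's -dkvc flag mentioned above wires a dump of this view into the examples.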
2023-11-23  readme : update hot topics  (Georgi Gerganov)
2023-11-23  examples : fix typo in parallel example doc comment (#4181)  (Daniel Bevenius)
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2023-11-23  docs : add llama-star arch idea  (Georgi Gerganov)
2023-11-21  stablelm : simplify + speedup generation (#4153)  (Galunid)
2023-11-20  finetune - update readme to mention llama support only (#4148)  (Galunid)
2023-11-20  readme : update ROCm Windows instructions (#4122)  (Aaryaman Vasishta)
* Update README.md
* Update README.md
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
---------
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2023-11-20  main : Add ChatML functionality to main example (#4046)  (Seb C)
Co-authored-by: Sebastian Cramond <sebby37@users.noreply.github.com>
2023-11-20  ci : add flake8 to github actions (python linting) (#4129)  (Galunid)
Disabled rules:
* E203 Whitespace before ':' - disabled because we often use 'C' Style where values are aligned
* E211 Whitespace before '(' - disabled because we often use 'C' Style where values are aligned
* E221 Multiple spaces before operator - disabled because we often use 'C' Style where values are aligned
* E225 Missing whitespace around operator - disabled because it's broken so often it seems like a standard
* E231 Missing whitespace after ',', ';', or ':' - disabled because we often use 'C' Style where values are aligned
* E241 Multiple spaces after ',' - disabled because we often use 'C' Style where values are aligned
* E251 Unexpected spaces around keyword / parameter equals - disabled because it's broken so often it seems like a standard
* E261 At least two spaces before inline comment - disabled because it's broken so often it seems like a standard
* E266 Too many leading '#' for block comment - sometimes used as "section" separator
* E501 Line too long - disabled because it's broken so often it seems like a standard
* E701 Multiple statements on one line (colon) - broken only in convert.py when defining abstract methods (we can use '# noqa' instead)
* E704 Multiple statements on one line - broken only in convert.py when defining abstract methods (we can use '# noqa' instead)
2023-11-20  speculative : fix prompt tokenization in speculative example (#4025)  (Branden Butler)
* Support special tokens and not adding BOS to prompt in speculative
* Adapt to new should_add_bos function
* Ensure tgt and dft have same add_bos setting
2023-11-19  Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"  (Georgi Gerganov)
This reverts commit 05e8301e4593e2a67b4bae24f093dd12ce5cc7c2.
2023-11-19  finetune : add --n-gpu-layers flag info to --help (#4128)  (Clark Saben)
2023-11-19  server : relay error messages (#4131)  (SoftwareRenderer)
2023-11-19  common : comma should be semicolon (#4137)  (kchro3)
2023-11-19  gitignore : tokenize  (Georgi Gerganov)
2023-11-19  gguf-py : export chat templates (#4125)  (slaren)
* gguf-py : export chat templates
* llama.cpp : escape new lines in gguf kv info prints
* gguf-py : bump version
* gguf-py : check chat_template type
* gguf-py : initialize chat_template
2023-11-18  tokenize example: Respect normal add BOS token behavior (#4126)  (Kerfuffle)
Allow building with Makefile
2023-11-18  scripts : Remove missed baichuan convert script (#4127)  (Galunid)
2023-11-18  Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124)  (Kerfuffle)
* ggml-cuda.cu: Clean up warnings when compiling with clang
* ggml-cuda.cu: Move static items into anonymous namespace
* ggml-cuda.cu: Fix use of namespace start macro
* Revert "ggml-cuda.cu: Fix use of namespace start macro"
  This reverts commit 26c11490266c096e3e5731e05270a8f73a5b2874.
* Revert "ggml-cuda.cu: Move static items into anonymous namespace"
  This reverts commit e29757e0f7535d1ac314300f0324684cc785e06c.
2023-11-17  llama : increase max nodes (#4115)  (slaren)
2023-11-17  build : support ppc64le build for make and CMake (#3963)  (Roger Meier)
* build: support ppc64le build for make and CMake
* build: keep __POWER9_VECTOR__ ifdef and extend with __powerpc64__
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-11-17  tokenize : fix trailing whitespace  (Georgi Gerganov)
2023-11-17  examples : add tokenize (#4039)  (zakkor)
2023-11-17  convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (#4089)  (Don Mahurin)
Co-authored-by: Don Mahurin <@>
2023-11-17  py : Falcon HF compatibility (#4104)  (John)
Falcon HF compatibility
2023-11-17  common : improve yaml log escaping (#4080)  (Jannis Schönleber)
* logging: improve escaping in yaml output
* logging: include review feedback
2023-11-17  llava : fix compilation warning that fread return value is not used (#4069)  (Huawei Lin)
2023-11-17  py : remove superfluous import statements (#4076)  (Jiří Podivín)
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
2023-11-17  train : move number of gpu layers argument parsing to common/train.cpp (#4074)  (Jiří Podivín)
- introduces help entry for the argument
- cuts '--gpu-layers' form in order to simplify usage and documentation.
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
2023-11-17  llama : add functions to get the model's metadata (#4013)  (slaren)
* llama : add functions to get the model's metadata
* format -> std::to_string
* better documentation
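A short sketch of enumerating the metadata with the new functions; the signatures follow the PR's additions to llama.h, and the buffer sizes are arbitrary choices for the example:

```cpp
// Sketch: list all GGUF metadata key/value pairs of a loaded model.
// Assumes a valid llama_model * obtained from llama_load_model_from_file.
#include "llama.h"
#include <cstdio>

void dump_model_meta(const llama_model * model) {
    char key[256];
    char val[256];

    const int n = llama_model_meta_count(model);
    for (int i = 0; i < n; i++) {
        // both calls return the string length, or a negative value on failure
        if (llama_model_meta_key_by_index(model, i, key, sizeof(key)) >= 0 &&
            llama_model_meta_val_str_by_index(model, i, val, sizeof(val)) >= 0) {
            printf("%s = %s\n", key, val);
        }
    }
}
```

There is also a keyed lookup, llama_model_meta_val_str, for fetching a single value by name.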
2023-11-17  finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (#4079)  (gwjr)
* Remove logically superfluous assertions and order by dimension
* Use cblas_sgemm() to implement ggml_compute_forward_out_prod()
* Remove ggml_compute_forward_out_prod_use_blas(), fix compiling errors on cmake/zig, remove trailing whitespace
* Add openBLAS support for sgemm() in compute_forward_out_prod()
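The underlying mapping: an accumulated outer product C += A * B^T is exactly one GEMM call. A generic illustration of the mapping (row-major shapes are assumptions for the example, not the ggml tensor layout):

```cpp
// Illustration: replace a triple nested loop computing C += A * B^T
// with a single cblas_sgemm call. A is m x k, B is n x k, C is m x n.
#include <cblas.h>

void out_prod_blas(const float * A, const float * B, float * C,
                   int m, int n, int k) {
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                m, n, k,
                1.0f,      // alpha
                A, k,      // lda = row stride of A
                B, k,      // ldb = row stride of B (B enters the product transposed)
                1.0f,      // beta = 1 accumulates into C instead of overwriting it
                C, n);     // ldc = row stride of C
}
```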
2023-11-17  finetune : zero the loraB initial vectors (#4082)  (Andrew Godfrey)
* finetune : zero the loraB initial vectors
  Without this, the first iteration starts out far from the base model instead of exactly on it. Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs (though it departs from the paper in using a different distribution for the other vector, in some cases).
* tabs to spaces
* Use ggml_set_zero instead of adding a new function
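The reasoning in the LoRA paper's notation (the standard formulation, not code from this commit):

```latex
% LoRA reparameterizes the adapted weight as
%   W' = W + \Delta W = W + B A,
% with B \in \mathbb{R}^{d \times r} and A \in \mathbb{R}^{r \times k}.
% Initializing B = 0 forces \Delta W = B A = 0, hence W' = W:
% the first finetuning step starts exactly at the base model,
% regardless of how A is initialized.
W' = W + BA, \qquad B_0 = 0 \;\Rightarrow\; \Delta W_0 = B_0 A_0 = 0 \;\Rightarrow\; W'_0 = W
```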
2023-11-17  cuda : get_row_rounding F32 (#4095)  (Andrew Godfrey)
* Fix #4017
* Update ggml-cuda.cu
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* Update ggml-cuda.cu
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
---------
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2023-11-17  llama : fix data units (#4101)  (Georgi Gerganov)
* llama : fix data units
ggml-ci
* Revert "llama : fix data units"
  This reverts commit f5feac831fe225ed7f3db938d115732a49dccfc4.
* llama : disambiguate data units
ggml-ci
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  (Kerfuffle)
* gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.
* Respect add_bos_token GGUF metadata value
* gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time
2023-11-16  gguf : fix potential infinite loops while parsing (#4100)  (texmex76)
Co-authored-by: Bernhard Gstrein <gstrein@cs.uni-freiburg.de>
2023-11-15  llama : restore prefix space in llama tokenizer (#4081)  (Jared Van Bortel)
2023-11-15  ggml-cuda : increase max graph size (#4084)  (slaren)
2023-11-14  Fix MacOS Sonoma model quantization (#4052)  (Michael Potter)
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>