* Use mmap in torch load, prefer .bin files when loading (see the mmap sketch below)
* Revert .bin > .safetensors preference
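For the first bullet, the general idea behind mmap-based loading, as a minimal POSIX sketch in C++ (illustrative, not the actual conversion code): mapping the file read-only lets the OS page tensor data in on demand instead of copying the whole file up front.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>

    int main(int argc, char ** argv) {
        if (argc < 2) return 1;
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); close(fd); return 1; }
        // map the whole file read-only; pages are faulted in lazily on access
        void * data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
        // ... read tensor data directly out of `data`, no intermediate copy ...
        munmap(data, st.st_size);
        close(fd);
        return 0;
    }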
* reserve space for codepoints (see the reserve sketch below)
* improve handling of the appended terminating 0
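A generic sketch of the reserve pattern the first bullet refers to (not the actual decode_utf8 code; the appended 0 acts as a terminator):

    #include <cstdint>
    #include <string>
    #include <vector>

    std::vector<uint32_t> decode_codepoints(const std::string & src) {
        std::vector<uint32_t> cps;
        // upper bound: at most one codepoint per byte, plus the appended 0,
        // so a single reserve avoids reallocation during the decode loop
        cps.reserve(src.size() + 1);
        for (unsigned char c : src) {
            // real decoding handles multi-byte UTF-8 sequences; single bytes here for brevity
            cps.push_back(c);
        }
        cps.push_back(0); // terminating 0
        return cps;
    }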
* Add an OpenAI-compatible POST /v1/chat/completions API endpoint to the server example (request sketch below)
* fix code style
* Update server README.md
* Improve server README.md
* Fix server.cpp code style according to review
* server : some style changes
* server : indentation
* server : enable special tokens during tokenization by default
* server : minor code style
* server : change random string generator
* straightforward /v1/models endpoint
---------
Co-authored-by: kir-gadjello <111190790+kir-gadjello@users.noreply.github.com>
Co-authored-by: Tobi Lütke <tobi@Tobis-MacBook-Pro.local>
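A sketch of a client request against the new endpoint, using cpp-httplib and nlohmann::json (the libraries the server example itself builds on); the payload follows the OpenAI chat-completions schema, and so does the response:

    #include "httplib.h"
    #include "json.hpp"
    #include <cstdio>

    int main() {
        httplib::Client cli("http://localhost:8080");
        nlohmann::json req = {
            {"model", "local"}, // the example server serves whatever model it was launched with
            {"messages", nlohmann::json::array({
                {{"role", "system"}, {"content", "You are a helpful assistant."}},
                {{"role", "user"},   {"content", "Hello!"}},
            })},
        };
        auto res = cli.Post("/v1/chat/completions", req.dump(), "application/json");
        if (res && res->status == 200) {
            auto body = nlohmann::json::parse(res->body);
            printf("%s\n", body["choices"][0]["message"]["content"].get<std::string>().c_str());
        }
        return 0;
    }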
* ggml-cuda : support stablelm rope
* remove unused freq_base kernel parameter
* add n_dims parameter to llm_build_k_shift, default to n_rot via overload (see the overload sketch below)
* llama : fix llm_build_k_shift args
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
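The "default to n_rot via overload" pattern, sketched generically (names are illustrative, not the real llm_build_k_shift signature):

    // full version: caller passes the number of rotated dimensions explicitly
    void build_k_shift(int n_ctx, int n_rot, int n_dims) {
        // ... shift only the first n_dims dimensions of K ...
    }

    // overload: existing call sites keep working, with n_dims defaulting to n_rot
    void build_k_shift(int n_ctx, int n_rot) {
        build_k_shift(n_ctx, n_rot, n_rot);
    }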
llama_token_eos(const struct llama_model *) was being passed a variable of type struct llama_context as its parameter.
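A minimal sketch of the corrected call, using the llama_get_model() accessor from llama.h:

    #include "llama.h"

    // before: llama_token_eos() was handed the llama_context itself;
    // after:  fetch the model from the context and pass that instead
    llama_token get_eos(struct llama_context * ctx) {
        const struct llama_model * model = llama_get_model(ctx);
        return llama_token_eos(model);
    }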
* Update README.md to use PATH for Windows ROCm
* Update README.md
* Update README.md
* Fix incorrect format strings and uninitialized variables (examples of both sketched below).
* Address comments
* Add the missing include statement
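Typical instances of both bug classes, sketched generically (not the exact sites fixed here):

    #include <cstdio>
    #include <vector>

    void example(const std::vector<int> & v) {
        // format string: size_t must be printed with %zu, not %d
        printf("%zu elements\n", v.size());

        // uninitialized variable: accumulate from a defined starting value
        int sum = 0;
        for (int x : v) sum += x;
        printf("sum = %d\n", sum);
    }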
* llama : keep track of used KV cells + better KV cache management
* llama : zero KV cache used upon clear
ggml-ci
* llama : allow exporting a view of the KV cache (#4180; usage sketch below)
* Allow exporting a view of the KV cache
* Allow dumping the sequences per cell in common
* Track max contiguous cells value and position as well
* Fix max contiguous empty cells index calculation
Make the dump functions handle lengths or sequence counts > 10 better
* Fix off by one error in dump_kv_cache_view
* Add doc comments for KV cache view functions
Eliminate cell sequence struct; use llama_seq_id directly
Minor cleanups
* common : add -dkvc arg for enabling kv cache dumps
---------
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
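A usage sketch for the exported view, assuming the llama_kv_cache_view_init/update/free API from #4180 and a view struct exposing used_cells and max_contiguous counters (check llama.h for the exact fields):

    #include "llama.h"
    #include <cstdio>

    void inspect_kv_cache(struct llama_context * ctx) {
        // track up to 4 sequence ids per cell in the view
        struct llama_kv_cache_view view = llama_kv_cache_view_init(ctx, 4);
        llama_kv_cache_view_update(ctx, &view);
        printf("used cells: %d, max contiguous empty run: %d\n",
               view.used_cells, view.max_contiguous);
        llama_kv_cache_view_free(&view);
    }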
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
* Update README.md
* Update README.md
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
---------
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Co-authored-by: Sebastian Cramond <sebby37@users.noreply.github.com>
Disabled rules (collected into a config sketch below):
* E203 Whitespace before ':' - disabled because we often use 'C' style where values are aligned
* E211 Whitespace before '(' - disabled because we often use 'C' style where values are aligned
* E221 Multiple spaces before operator - disabled because we often use 'C' style where values are aligned
* E225 Missing whitespace around operator - disabled because it is violated so often that it is effectively the standard
* E231 Missing whitespace after ',', ';', or ':' - disabled because we often use 'C' style where values are aligned
* E241 Multiple spaces after ',' - disabled because we often use 'C' style where values are aligned
* E251 Unexpected spaces around keyword / parameter equals - disabled because it is violated so often that it is effectively the standard
* E261 At least two spaces before inline comment - disabled because it is violated so often that it is effectively the standard
* E266 Too many leading '#' for block comment - sometimes used as a "section" separator
* E501 Line too long - disabled because it is violated so often that it is effectively the standard
* E701 Multiple statements on one line (colon) - broken only in convert.py when defining abstract methods (we can use # noqa instead)
* E704 Multiple statements on one line (def) - broken only in convert.py when defining abstract methods (we can use # noqa instead)
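Taken together, the list amounts to an ignore line along these lines in the flake8 config (a sketch; the actual file name and location in the repo may differ):

    # .flake8
    [flake8]
    ignore = E203,E211,E221,E225,E231,E241,E251,E261,E266,E501,E701,E704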
* Support special tokens and allow skipping BOS in the prompt in the speculative example
* Adapt to the new should_add_bos function
* Ensure tgt and dft have the same add_bos setting (sketch below)
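A sketch of the consistency check in the last bullet, assuming a should_add_bos-style helper as referenced above (names illustrative):

    #include "llama.h"
    #include <cstdio>

    // illustrative declaration; the commit adapts to a new should_add_bos function
    bool should_add_bos(const struct llama_model * model);

    int check_add_bos(const struct llama_model * model_tgt,
                      const struct llama_model * model_dft) {
        // the target (tgt) and draft (dft) models must agree on whether a BOS
        // token is prepended, or they will tokenize the prompt differently
        const bool add_bos_tgt = should_add_bos(model_tgt);
        const bool add_bos_dft = should_add_bos(model_dft);
        if (add_bos_tgt != add_bos_dft) {
            fprintf(stderr, "%s: draft model add_bos must match target model\n", __func__);
            return 1;
        }
        return 0;
    }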
This reverts commit 05e8301e4593e2a67b4bae24f093dd12ce5cc7c2.
* gguf-py : export chat templates
* llama.cpp : escape new lines in gguf kv info prints
* gguf-py : bump version
* gguf-py : check chat_template type
* gguf-py : initialize chat_template
Allow building with Makefile
* ggml-cuda.cu: Clean up warnings when compiling with clang (see the linkage sketch below)
* ggml-cuda.cu: Move static items into anonymous namespace
* ggml-cuda.cu: Fix use of namespace start macro
* Revert "ggml-cuda.cu: Fix use of namespace start macro"
This reverts commit 26c11490266c096e3e5731e05270a8f73a5b2874.
* Revert "ggml-cuda.cu: Move static items into anonymous namespace"
This reverts commit e29757e0f7535d1ac314300f0324684cc785e06c.
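The two linkage idioms in play here, side by side (the anonymous-namespace variant is what the reverted commits moved to and back from):

    // C-style: each file-local symbol is marked static individually
    static int helper(int x) { return 2 * x; }

    // C++-style: everything inside an anonymous namespace gets internal linkage
    namespace {
    int helper2(int x) { return 2 * x; }
    } // namespace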
* build: support ppc64le build for make and CMake
* build: keep the __POWER9_VECTOR__ ifdef and extend it with __powerpc64__ (sketch below)
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
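One plausible shape of the preprocessor change in the second bullet (illustrative, not the exact ggml source):

    // before: the SIMD path was gated on POWER9 only
    //   #if defined(__POWER9_VECTOR__)

    // after: generic 64-bit PowerPC also takes the vector path
    #if defined(__POWER9_VECTOR__) || defined(__powerpc64__)
        // ... PowerPC vector implementations ...
    #endif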
load (#4089)
Co-authored-by: Don Mahurin <@>
Falcon HF compatibility
* logging: improve escaping in yaml output (see the escaping sketch below)
* logging: include review feedback
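The kind of escaping YAML output needs, as a generic sketch (not the exact common/log changes): backslashes and double quotes must be escaped inside double-quoted scalars, and literal newlines written as \n.

    #include <string>

    // escape a string for use inside a double-quoted YAML scalar
    static std::string yaml_escape(const std::string & in) {
        std::string out;
        out.reserve(in.size());
        for (char c : in) {
            switch (c) {
                case '\\': out += "\\\\"; break;
                case '"':  out += "\\\""; break;
                case '\n': out += "\\n";  break;
                default:   out += c;      break;
            }
        }
        return out;
    }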
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
- introduces a help entry for the argument
- drops the '--gpu-layers' form in order to simplify usage and documentation.
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
* llama : add functions to get the model's metadata (usage sketch below)
* format -> std::to_string
* better documentation
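A usage sketch for the new accessors, assuming the llama_model_meta_* functions as exposed in llama.h (a negative return value indicates a missing key):

    #include "llama.h"
    #include <cstdio>

    void print_metadata(const struct llama_model * model) {
        char key[256], val[256];
        // look up a single key ...
        if (llama_model_meta_val_str(model, "general.name", val, sizeof(val)) >= 0) {
            printf("general.name = %s\n", val);
        }
        // ... or iterate over all key/value pairs
        for (int32_t i = 0; i < llama_model_meta_count(model); i++) {
            llama_model_meta_key_by_index(model, i, key, sizeof(key));
            llama_model_meta_val_str_by_index(model, i, val, sizeof(val));
            printf("%s = %s\n", key, val);
        }
    }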
* Remove logically superfluous assertions and order by dimension
* Use cblas_sgemm() to implement ggml_compute_forward_out_prod()
* Remove ggml_compute_forward_out_prod_use_blas(), fix compiling errors on cmake/zig, remove trailing whitespace
* Add openBLAS support for sgemm() in compute_forward_out_prod()
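The heart of the change: out_prod computes C = A * B^T, which maps onto a single sgemm call instead of nested scalar loops. A generic row-major sketch (ggml's actual dimension/stride handling differs):

    #include <cblas.h>

    // C (m x n) = A (m x k) * B (n x k)^T -- one BLAS call replaces the naive loops
    void out_prod_sgemm(int m, int n, int k,
                        const float * A, const float * B, float * C) {
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                    m, n, k,
                    1.0f, A, k, B, k,
                    0.0f, C, n);
    }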
* finetune : zero the loraB initial vectors
Without this, the first iteration is starting out far from the base model, instead of exactly on it.
Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs
(though it departs from the paper by using a different distribution for the other vector, in some cases); see the one-line sketch below.
* tabs to spaces
* Use ggml_set_zero instead of adding a new function
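The reasoning in one line: the LoRA update is deltaW = loraB * loraA, so with loraB zeroed the first forward pass reproduces the base model exactly, whatever loraA was initialized to. In ggml terms the fix reduces to (tensor names illustrative):

    #include "ggml.h"

    static void init_lora_pair(struct ggml_tensor * lora_a, struct ggml_tensor * lora_b) {
        // leave loraA's random init as-is; zero loraB so that
        // deltaW = loraB * loraA == 0 and training starts exactly at the base model
        ggml_set_zero(lora_b);
        (void) lora_a;
    }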
* Fix #4017
* Update ggml-cuda.cu
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* Update ggml-cuda.cu
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
---------
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* llama : fix data units
ggml-ci
* Revert "llama : fix data units"
This reverts commit f5feac831fe225ed7f3db938d115732a49dccfc4.
* llama : disambiguate data units (GiB vs GB; see the note below)
ggml-ci
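The ambiguity being resolved is GB versus GiB, which differ by about 7%; a quick check of the numbers:

    // 1 GB = 10^9 bytes; 1 GiB = 2^30 bytes = 1073741824 bytes,
    // so e.g. a 13 GiB model is ~13.96 GB -- mixing the units up
    // misreports sizes by ~7.4%
    constexpr double GB  = 1e9;
    constexpr double GiB = 1024.0 * 1024.0 * 1024.0;
    static_assert(GiB / GB > 1.073 && GiB / GB < 1.074, "GiB is ~7.4% larger than GB");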
* gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.
* Respect add_bos_token GGUF metadata value
* gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time
Co-authored-by: Bernhard Gstrein <gstrein@cs.uni-freiburg.de>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>