Age | Commit message | Author
2023-11-17train : move number of gpu layers argument parsing to common/train.cpp (#4074)Jiří Podivín
- introduces a help entry for the argument
- cuts the '--gpu-layers' form in order to simplify usage and documentation
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
2023-11-17llama : add functions to get the model's metadata (#4013)slaren
* llama : add functions to get the model's metadata
* format -> std::to_string
* better documentation
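A minimal sketch of how the new getters might be used from the C API, assuming the accessor names introduced in #4013 (llama_model_meta_count, llama_model_meta_key_by_index, llama_model_meta_val_str_by_index, llama_model_meta_val_str):

    #include <cstdio>
    #include "llama.h"

    // Dump every metadata key/value pair stored in the loaded model.
    static void dump_model_metadata(const struct llama_model * model) {
        char key[256];
        char val[256];
        const int32_t n_kv = llama_model_meta_count(model);
        for (int32_t i = 0; i < n_kv; i++) {
            llama_model_meta_key_by_index(model, i, key, sizeof(key));
            llama_model_meta_val_str_by_index(model, i, val, sizeof(val));
            printf("%s = %s\n", key, val);
        }
        // Look up a single key directly; a negative return value means the key is missing.
        if (llama_model_meta_val_str(model, "general.architecture", val, sizeof(val)) >= 0) {
            printf("architecture: %s\n", val);
        }
    }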
2023-11-17finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (#4079)gwjr
* Remove logically superfluous assertions and order by dimension
* Use cblas_sgemm() to implement ggml_compute_forward_out_prod()
* Remove ggml_compute_forward_out_prod_use_blas(), fix compile errors on cmake/zig, remove trailing whitespace
* Add OpenBLAS support for sgemm() in compute_forward_out_prod()
2023-11-17finetune : zero the loraB initial vectors (#4082)Andrew Godfrey
* finetune : zero the loraB initial vectors
  Without this, the first iteration starts out far from the base model instead of exactly on it. Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs (though it departs from the paper in using a different distribution for the other vector, in some cases).
* tabs to spaces
* Use ggml_set_zero instead of adding a new function
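The reasoning: the LoRA update is the product loraB * loraA added onto the base weight, so initializing loraB to zero makes the initial delta exactly zero and training starts from the unmodified base model. A minimal sketch of that init, assuming ggml tensor helpers along the lines of those used in finetune (the randomize_tensor_normal name is illustrative):

    // A gets a small random init, B starts at zero, so delta W = B * A == 0 at step 0.
    struct ggml_tensor * lora_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, n_embd, lora_r);
    struct ggml_tensor * lora_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lora_r, n_embd);
    randomize_tensor_normal(lora_a, rnd); // illustrative helper: fill A from a normal distribution
    ggml_set_zero(lora_b);                // existing ggml helper: B = 0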
2023-11-17cuda : get_row_rounding F32 (#4095)Andrew Godfrey
* Fix #4017
* Update ggml-cuda.cu
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* Update ggml-cuda.cu
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
---------
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2023-11-17llama : fix data units (#4101)Georgi Gerganov
* llama : fix data units
  ggml-ci
* Revert "llama : fix data units"
  This reverts commit f5feac831fe225ed7f3db938d115732a49dccfc4.
* llama : disambiguate data units
  ggml-ci
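For context, the ambiguity being resolved is between decimal and binary units. A quick standalone illustration of the difference (my own example, not taken from the patch):

    #include <cstdio>
    #include <cstdint>

    int main() {
        const uint64_t bytes = 7'000'000'000ull; // e.g. a ~7 GB model file
        printf("%.2f GB  (decimal, 10^9 bytes)\n", bytes / 1e9);                      // 7.00 GB
        printf("%.2f GiB (binary,  2^30 bytes)\n", bytes / (1024.0 * 1024.0 * 1024.0)); // ~6.52 GiB
        return 0;
    }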
2023-11-16Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)Kerfuffle
* gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.
* Respect add_bos_token GGUF metadata value
* gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time
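A sketch of the idea on the consumer side, assuming the tokenizer.ggml.add_bos_token key can be read through the metadata API added above and that llama_tokenize takes an add_bos flag (signatures are approximate, not the library's internal implementation):

    #include <string>
    #include <vector>
    #include "llama.h"

    // Decide whether to prepend BOS based on the model's own metadata instead of always adding it.
    std::vector<llama_token> tokenize_respecting_bos(const struct llama_model * model, const std::string & text) {
        char val[8];
        bool add_bos = true; // fall back to the old behaviour if the key is absent
        if (llama_model_meta_val_str(model, "tokenizer.ggml.add_bos_token", val, sizeof(val)) >= 0) {
            add_bos = (val[0] == 't' || val[0] == '1'); // value exposed as a string, e.g. "true"/"false"
        }
        std::vector<llama_token> tokens(text.size() + (add_bos ? 1 : 0));
        const int n = llama_tokenize(model, text.c_str(), (int) text.size(),
                                     tokens.data(), (int) tokens.size(), add_bos, /*special =*/ false);
        tokens.resize(n > 0 ? n : 0);
        return tokens;
    }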
2023-11-16gguf : fix potential infinite loops while parsing (#4100)texmex76
Co-authored-by: Bernhard Gstrein <gstrein@cs.uni-freiburg.de>
2023-11-15llama : restore prefix space in llama tokenizer (#4081)Jared Van Bortel
2023-11-15ggml-cuda : increase max graph size (#4084)slaren
2023-11-14Fix MacOS Sonoma model quantization (#4052)Michael Potter
Co-authored-by: Jared Van Bortel <jared@nomic.ai> Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-11-14stablelm : StableLM support (#3586)Galunid
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers
2023-11-13convert.py: also look for plain model.safetensors (#4043)afrideva
* add safetensors to convert.py help message
* Check for single-file safetensors model
* Update convert.py "model" option help message
* revert convert.py help message change
2023-11-13llava : fix regression for square images in #3613 (#4056)M. Yusuf Sarıgöz
2023-11-13ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060)Georgi Gerganov
ggml-ci
2023-11-13readme : update hot topicsGeorgi Gerganov
2023-11-13sync : ggml (backend v2) (#3912)Georgi Gerganov
* sync : ggml (backend v2) (wip)
* sync : migrate examples and llama.cpp to dynamic graphs (wip)
* sync : update tests + fix max op params to 64
  ggml-ci
* sync : ggml-cuda
  ggml-ci
* llama : fix save/load state context size
  ggml-ci
* sync : try to fix build on tvOS
* sync : pass custom graph sizes in training examples
* sync : update graph copies to new ggml API
* sync : update sync-ggml.sh with new files
* scripts : fix header in sync script
* train : fix context size calculations
* llama : increase inference graph size up to 4096 nodes
* train : allocate grads for backward graphs
* train : allocate grads for gb_tmp
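After this sync, graphs are sized dynamically rather than from a fixed compile-time array. A minimal sketch of allocating a larger-than-default graph, assuming the post-backend-v2 API in which ggml_new_graph_custom takes a node count and a grads flag:

    // Build an inference graph that can hold more nodes than the default,
    // optionally reserving gradient storage for training.
    struct ggml_cgraph * gf = ggml_new_graph_custom(ctx, 4096, /*grads =*/ false);
    ggml_build_forward_expand(gf, output_tensor);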
2023-11-13Add ReLU and SQR CUDA ops to (partially) fix Persimmon offloading (#4041)Kerfuffle
* Add ReLU and SQR CUDA ops to fix Persimmon offloading
* Persimmon loader: More helpful error on CUDA/ROCM when offloading too many layers
2023-11-12gguf-py: gguf_writer: Use bytearray to build metadata (#4051)Kerfuffle
* gguf-py: gguf_writer: Use BytesIO to build metadata
* Use bytearray instead
  Bump gguf-py package version
2023-11-11Fix some documentation typos/grammar mistakes (#4032)Richard Kiss
* typos
* Update examples/parallel/README.md
  Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
---------
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
2023-11-11Fix gguf-convert-endian script (#4037)M. Yusuf Sarıgöz
* Fix gguf-convert-endian script
* Bump version and update description
2023-11-10server : fix crash when prompt exceeds context size (#3996)Alexey Parfenov
2023-11-11gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)Kerfuffle
* gguf-py: Refactor and add file reading support
* Replay changes from #3871
  Credit to @cebtenzzre for that pull
* Various type annotation fixes.
* sort imports with isort (again)
* Fix missing return statement in add_tensor
* style cleanup with flake8
* fix NamedTuple and Enum usage
* Fix an issue with state init in GGUFReader
  Move examples to an examples/ directory
  Clean up examples
  Add an example of modifying keys in a GGUF file
  Update documentation with info on examples
  Try to support people importing gguf/gguf.py directly
* Damagage is not a word.
* Clean up gguf-py/examples/modify_gguf.py whitespace
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* Update gguf-py/examples/modify_gguf.py formatting
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* Update gguf-py/gguf/gguf_reader.py type hint
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* Make examples executable, formatting changes
* Add more information to GGUFReader and examples comments
* Include a gguf Python package version bump
* Add convert-gguf-endian.py script
* cleanup
* gguf-py : bump minor version
* Reorganize scripts
* Make GGUFReader endian detection less arbitrary
* Add JSON dumping support to gguf-dump.py
  Which I kind of regret now
* A few gguf-dump.py cleanups
* Murder accidental tuple in gguf-py/scripts/gguf-dump.py
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* cleanup
* constants : remove unneeded type annotations
* fix python 3.8 compat
* Set up gguf- scripts in pyproject.toml
* And include scripts/__init__.py, derp
* convert.py: We can't currently support Q8_0 on big endian.
* gguf-py: SpecialVocab: Always try available sources for special token ids
  gguf-py: SpecialVocab: Try to load merges from merges.txt if not in tokenizer.json
  gguf-py: SpecialVocab: Add 'add_bos_token' type bools to GGUF metadata
* cleanup
* Promote add_X_token to GGUF metadata for BOS and EOS
---------
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2023-11-10server : allow continue edit on completion mode (#3950)Jhen-Jie Hong
* server : allow continue edit on completion mode
* server : handle abort case in runCompletion
* server : style improvement
2023-11-10Unbreak persimmon after #3837 (#4010)Galunid
2023-11-09scripts: Generalize convert scripts (#3838)Galunid
* Replace convert-*-hf-to-gguf.py files with convert-hf-to-gguf.py
2023-11-08server : add min_p param (#3877)Mihai
* Update server.cpp with min_p after it was introduced in https://github.com/ggerganov/llama.cpp/pull/3841
* Use spaces instead of tabs
* Update index.html.hpp after running deps.sh
* Fix test - fix line ending
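For reference, min_p sampling (the parameter being plumbed through here) keeps only tokens whose probability is at least min_p times the probability of the most likely token. A standalone illustration of that filter, not the server's actual code:

    #include <algorithm>
    #include <vector>

    // Keep candidates with prob >= min_p * max_prob; assumes probs are already normalized.
    std::vector<float> min_p_filter(std::vector<float> probs, float min_p) {
        const float max_prob  = *std::max_element(probs.begin(), probs.end());
        const float threshold = min_p * max_prob;
        for (float & p : probs) {
            if (p < threshold) {
                p = 0.0f; // masked out; a real sampler would renormalize afterwards
            }
        }
        return probs;
    }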
2023-11-08ggml-alloc : fix backend assignments of views (#3982)slaren
2023-11-07gguf : track writer state, free unneeded tensors, cleanup (#3871)Jared Van Bortel
2023-11-07make : do not add linker flags when compiling static llava lib (#3977)Georgi Gerganov
2023-11-07ggml : fix backward rope after YaRN (#3974)xaedes
* fix backward process of rope
  The rope backward process was broken after the YaRN RoPE (#2268) implementation, due to missing changes in the backward functions.
  The code for the backward process is nearly identical to the forward process: the only difference is the sign of the sin-values. To avoid future regressions, remove the near-duplicate backward functions and reuse the forward code: for this a new function argument `bool forward` was added to `ggml_compute_forward_rope_f32` and `ggml_compute_forward_rope_f16`. The sin-values are negated when forward is false.
* fix finetune rope call to use correct default attn_factor of 1.0f
* remove unused `ggml_rope_xpos_back`
  It is better to have only one `ggml_rope_back` function that accepts all rope parameters, so that `ggml_compute_backward` can propagate all parameters without having to switch between different rope_back variants.
* fix comments explaining the sine sign in ggml_forward_rope
* add missing function arguments in declaration
* fix function argument type in declaration
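A compact sketch of the idea: the rotation applied per pair of elements in the forward pass is inverted simply by negating the sine term, so one routine can serve both directions. The loop body below is illustrative, not the ggml kernel itself:

    #include <cmath>

    // Rotate one (x0, x1) pair by theta; forward = false applies the inverse rotation.
    static void rope_pair(float & x0, float & x1, float theta, bool forward) {
        const float sin_sign = forward ? 1.0f : -1.0f;
        const float c = cosf(theta);
        const float s = sinf(theta) * sin_sign;
        const float y0 = x0 * c - x1 * s;
        const float y1 = x0 * s + x1 * c;
        x0 = y0;
        x1 = y1;
    }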
2023-11-07Use params when loading models in llava-cli (#3976)Matthew Tejo
llava-cli was loading models with default params and ignoring settings from the cli. This switches to a generic function to load the params from the cli options.
2023-11-07cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946)Meng Zhang
* prototyping the idea that supports running on CPU for a GGML_USE_CUBLAS=on build
* doc: add comments to ggml_cublas_loaded()
* fix defined(...)
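A minimal sketch of how a CUBLAS-enabled binary might fall back to the CPU path, assuming ggml_cublas_loaded() reports whether a usable CUDA device was actually initialized:

    #ifdef GGML_USE_CUBLAS
    // Only offload layers if the CUDA backend really came up; otherwise run fully on the CPU.
    const bool use_gpu = ggml_cublas_loaded();
    #else
    const bool use_gpu = false;
    #endif
    const int n_gpu_layers = use_gpu ? requested_gpu_layers : 0;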
2023-11-07llava : expose as a shared library for downstream projects (#3613)Damian Stewart
* wip llava python bindings compatibility
* add external llava API
* add base64 in-prompt image support
* wip refactor image loading
* refactor image load out of llava init
* cleanup
* further cleanup; move llava-cli into its own file and rename
* move base64.hpp into common/
* collapse clip and llava libraries
* move llava into its own subdir
* wip
* fix bug where base64 string was not removed from the prompt
* get libllava to output in the right place
* expose llava methods in libllama.dylib
* cleanup memory usage around clip_image_*
* cleanup and refactor *again*
* update headerdoc
* build with cmake, not tested (WIP)
* Editorconfig
* Editorconfig
* Build with make
* Build with make
* Fix cyclical deps on Windows
* attempt to fix build on Windows
* attempt to fix build on Windows
* Upd TODOs
* attempt to fix build on Windows+CUDA
* Revert changes in cmake
* Fix according to review comments
* Support building as a shared library
* address review comments
---------
Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
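A rough usage sketch of the exported llava API, assuming helper names along the lines of those added in this change (llava_image_embed_make_with_filename, llava_eval_image_embed, llava_image_embed_free); treat the exact signatures as approximate:

    // Embed an image with CLIP and feed it into the llama context before the text prompt.
    struct llava_image_embed * embed =
        llava_image_embed_make_with_filename(ctx_clip, /*n_threads =*/ 4, "photo.jpg");
    if (embed) {
        int n_past = 0;
        llava_eval_image_embed(ctx_llama, embed, /*n_batch =*/ 512, &n_past);
        llava_image_embed_free(embed);
        // ...continue evaluating the text part of the prompt from n_past...
    }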
2023-11-05ggml-cuda : fix f16 mul mat (#3961)slaren
* ggml-cuda : fix f16 mul mat
  ggml-ci
* silence common.cpp warning (bonus)
2023-11-05Allow common process_escapes to handle \x sequences (#3928)Kerfuffle
* Allow common process_escapes to handle \x sequences
* Fix edge case when second hex digit is NUL
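A standalone sketch of the kind of handling involved: parse up to two hex digits after "\x" and stop early if the string ends, rather than reading past the terminator. This is an illustration of the idea, not the function in common.cpp:

    #include <cctype>
    #include <string>

    // Decode "\xNN" starting at s[i] (i points at 'x'); advances i past the digits consumed.
    static char decode_hex_escape(const std::string & s, size_t & i) {
        int value  = 0;
        int digits = 0;
        while (digits < 2 && i + 1 < s.size() && isxdigit((unsigned char) s[i + 1])) {
            const char c = s[++i];
            value = value * 16 + (isdigit((unsigned char) c) ? c - '0' : tolower(c) - 'a' + 10);
            digits++;
        }
        // With zero digits the sequence is invalid; real code would decide how to report that.
        return (char) value;
    }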
2023-11-05server : fix typo for --alias shortcut from -m to -a (#3958)Thái Hoàng Tâm
2023-11-05cuda : fix disabling device with --tensor-split 1,0 (#3951)Jared Van Bortel
Co-authored-by: slaren <slarengh@gmail.com>
2023-11-05llama : mark LLM_ARCH_STARCODER as full offload supported (#3945)Meng Zhang
as done in https://github.com/ggerganov/llama.cpp/pull/3827
2023-11-05cmake : MSVC instruction detection (fixed up #809) (#3923)Eve
* Add detection code for avx
* Only check hardware when option is ON
* Modify per code review suggestions
* Local builds will detect the CPU
* Fixes CMake style to use lowercase like everywhere else
* cleanup
* fix merge
* linux/gcc version for testing
* msvc combines avx2 and fma into /arch:AVX2 so check for both
* cleanup
* msvc only version
* style
* Update FindSIMD.cmake
---------
Co-authored-by: Howard Su <howard0su@gmail.com>
Co-authored-by: Jeremy Dunn <jeremydunn123@gmail.com>
2023-11-05ci : use intel sde when ci cpu doesn't support avx512 (#3949)Eve
2023-11-05cuda : revert CUDA pool stuff (#3944)slaren
* Revert "cuda : add ROCM aliases for CUDA pool stuff (#3918)" This reverts commit 629f917cd6b96ba1274c49a8aab163b1b189229d. * Revert "cuda : use CUDA memory pool with async memory allocation/deallocation when available (#3903)" This reverts commit d6069051de7165a4e06662c89257f5d2905bb156. ggml-ci
2023-11-04gguf-py: Support 01.AI Yi models (#3943)Kerfuffle
2023-11-03metal : round up to 16 to fix MTLDebugComputeCommandEncoder assertion (#3938)Peter Sugihara
2023-11-03ggml-metal: fix yarn rope (#3937)Xiao-Yong Jin
2023-11-03ggml-cuda : move row numbers to x grid dim in mmv kernels (#3921)slaren
2023-11-03speculative : change default p_accept to 0.5 + CLI args (#3919)Georgi Gerganov
ggml-ci
2023-11-03common : YAYF (yet another YARN fix) (#3925)Georgi Gerganov
ggml-ci
2023-11-03llama : change yarn_ext_factor placeholder to -1 (#3922)cebtenzzre
2023-11-02cuda : add ROCM aliases for CUDA pool stuff (#3918)Kerfuffle