path: root/examples
Age         Commit message                                                          [Author]
2023-12-14  ggml : add ggml_row_size() (fixes llama out of space) (#4461)  [LostRuins]
  * Fixes "Not enough space in the context's memory pool" encountered on certain models, which seems to be caused by some imprecision related to the automatic casting of floating point values
  * do not cast to size_t, instead just use doubles
  * ggml : add ggml_row_size(), deprecate ggml_type_sizef()
  * ggml : fix row size compute to avoid overflows
  * tests : fix sizey -> sizez
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-12-13  server : fix handling of characters that span multiple tokens when streaming (#4446)  [shibe2]
2023-12-12  server : tweak default sampling parameters (#4367)  [kalomaze]
  * Set a more typical Top P setting as the default
  * Update temp max
2023-12-12  english : use `typos` to fix comments and logs (#4354)  [Richard Kiss]
2023-12-12  server : fix local model name in server (#4420)  [Vladimir Zorin]
2023-12-10  Update README.md (#4388)  [Yueh-Po Peng]
  Fix small typo.
2023-12-07  llama : per-layer KV cache + quantum K cache (#4309)  [Georgi Gerganov]
  * per-layer KV
  * remove unnecessary copies
  * less code duplication, offload k and v separately
  * llama : offload KV cache per-layer
  * llama : offload K shift tensors
  * llama : offload for rest of the model arches
  * llama : enable offload debug temporarily
  * llama : keep the KV related layers on the device
  * llama : remove mirrors, perform Device -> Host when partial offload
  * common : add command-line arg to disable KV cache offloading
  * llama : update session save/load
  * llama : support quantum K cache (#4312)
    - llama : support quantum K cache (wip)
    - metal : add F32 -> Q8_0 copy kernel
    - cuda : add F32 -> Q8_0 copy kernel ggml-ci
    - cuda : use mmv kernel for quantum cache ops
    - llama : pass KV cache type through API
    - llama : fix build ggml-ci
    - metal : add F32 -> Q4_0 copy kernel
    - metal : add F32 -> Q4_1 copy kernel
    - cuda : wip
    - cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels
    - llama-bench : support type_k/type_v
    - metal : use mm kernel only for quantum KV cache
    - cuda : add comment
    - llama : remove memory_f16 and kv_f16 flags
    Co-authored-by: slaren <slarengh@gmail.com>
  * readme : add API change notice
  Co-authored-by: slaren <slarengh@gmail.com>
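  A hedged sketch of what the API change notice refers to: the KV cache element types are now selected per context instead of via the removed memory_f16/kv_f16 flags. The field names type_k/type_v are taken from the commit list above; check llama.h of this revision before relying on them:

      #include "llama.h"

      // create a context whose K cache is quantized to Q8_0 while V stays in F16
      llama_context * ctx_with_q8_k_cache(llama_model * model) {
          llama_context_params cparams = llama_context_default_params();
          cparams.n_ctx  = 4096;
          cparams.type_k = GGML_TYPE_Q8_0; // quantum K cache
          cparams.type_v = GGML_TYPE_F16;  // V cache unchanged
          return llama_new_context_with_model(model, cparams);
      }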
2023-12-07  train : fix #4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (#4351)  [Hongyu Ouyang]
  On commit b1108 (44c117f4) xaedes added

      ggml_allocr * alloc = NULL;
      ... (many lines in between)
      if (alloc) {
          ggml_allocr_free(alloc);
      }

  which is correct, but it is easy to lose context after many lines in between. On commit b1287 (0e76a899) xaedes made a big change. From there on, alloc is freed eagerly:

      alloc = ggml_allocr_new(...)
      ... (short lines of code)
      ggml_allocr_free(alloc)

  This happens a few times, but alloc is never set to NULL, and many lines below we still have

      if (alloc) {
          ggml_allocr_free(alloc);
      }

  which causes a double free.
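  A simplified sketch of the fix implied above (illustrative names, not the exact train-text-from-scratch code): once the allocator is freed eagerly, the pointer has to be reset so the old guarded free further down does not run a second time:

      #include "ggml-alloc.h"

      void measure_and_build(void) {
          ggml_allocr * alloc = NULL;

          alloc = ggml_allocr_new_measure(32 /* tensor alignment */);
          // ... build a graph with the measuring allocator ...
          ggml_allocr_free(alloc);
          alloc = NULL;            // the missing reset that caused the double free

          // ... many lines later, the original guard is now safely skipped ...
          if (alloc) {
              ggml_allocr_free(alloc);
          }
      }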
2023-12-06  server : recognize cache_prompt parameter in OAI API (#4347)  [Georgi Gerganov]
2023-12-06  speculative : support `--color` (#4343)  [stduhpf]
  * speculative: add some colors
  * minor : add braces
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-12-05  sampling : custom samplers order (#4285)  [MaggotHATE]
  * Samplers sequence order w parameter
  * Cleaned commented code
  * Fixed formatting
  * Rewrote with unordered_map
  * Revert and rewrite, too many problems and safeguards would be needed
  * Fixed code style
  * Code style fixes according to review
  * More readable samplers input string, fixed help
  * Style fix in sampler_queue
  * Formatting fixes
  * Fixing whitespaces
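  An illustrative sketch of the idea only (the character codes and function below are hypothetical, not the actual common/sampling.cpp implementation): the order in which the individual samplers run is taken from a user-supplied sequence string instead of being hard-coded:

      #include <string>

      void apply_samplers_in_order(const std::string & order) {
          for (const char s : order) {       // e.g. "kfypmt"
              switch (s) {
                  case 'k': /* top-k       */ break;
                  case 'f': /* tail-free   */ break;
                  case 'y': /* typical     */ break;
                  case 'p': /* top-p       */ break;
                  case 'm': /* min-p       */ break;
                  case 't': /* temperature */ break;
              }
          }
      }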
2023-12-04  simple : update error message for KV cache check (#4324)  [Daniel Bevenius]
  This commit updates the error message that is printed when the KV cache is not big enough to hold all the prompt and generated tokens. Specifically, it removes the reference to n_parallel and replaces it with n_len.
  Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2023-12-04  swift : fix concatenation method to avoid invalid UTF8 stringification (#4325)  [Miwa / Ensan]
2023-12-04  swift : fix prompt tokenization logic (#4321)  [Miwa / Ensan]
2023-12-03  server : fix OpenAI API `stop` field to be optional (#4299)  [Ed Lee]
  (cherry picked from commit Mozilla-Ocho/llamafile@e8c92bcb84ae3bcbf0d617b7ee6a5413bcbd58af)
2023-12-03  py : add grammar to oai like api (#4294)  [Rickard Edén]
2023-12-01  llama : support optional tensors (#4283)  [Georgi Gerganov]
2023-12-01  swift : fix token_to_piece implementation (#4278)  [Miwa / Ensan]
  * Fix token_to_piece implementation in Swift
  * Fix errors
2023-12-01  ggml : add ggml_soft_max_ext (#4256)  [Georgi Gerganov]
  * metal : implement soft_max_ext
  * cuda : implement soft_max_ext
  * ggml : implement soft_max_ext (CPU)
  * batched-bench : print threads ggml-ci
  * metal : simplify soft_max encoding ggml-ci
  * cuda : use 512 threads for soft_max instead of 32
  * ggml : update soft max cpu
  * cuda : do warp-based block reduce
  * cuda : increase max block size to 1024
  * cuda : fix warp reduction initialization of shared mem
  * metal : warp-based reduction for soft max kernel
  * metal : warp-based reduce for rms_norm
  * metal : simplify soft max kernel ggml-ci
  * alloc : fix build with debug
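  A hedged sketch of how the fused op can be used in an attention graph, assuming the signature introduced here is ggml_soft_max_ext(ctx, a, mask, scale) (verify against ggml.h):

      #include "ggml.h"

      // replaces the previous scale -> add(mask) -> soft_max chain with one fused call
      struct ggml_tensor * masked_softmax(struct ggml_context * ctx,
                                          struct ggml_tensor  * kq,       // attention scores
                                          struct ggml_tensor  * kq_mask,  // causal mask
                                          float                 kq_scale) {
          return ggml_soft_max_ext(ctx, kq, kq_mask, kq_scale);
      }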
2023-12-01  server : add --log-disable to disable logging to file (#4260)  [Ziad Ben Hadj-Alouane]
  * add --log-disable to disable logging to file in the server example
  * typo fix
2023-12-01  server : add single-client multi-prompt support (#4232)  [Ziad Ben Hadj-Alouane]
  * add multiprompt support
  * cleanup
  * more cleanup
  * remove atomicity of id_gen, and change lock_guard to unique_lock on completion requests
  * remove all references to mutex_multitasks
  * Update examples/server/server.cpp (x4)
  * change to set
  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2023-11-30  llava : ShareGPT4V compatibility (vision encoder only loading) (#4172)  [John]
  * ShareGPT4 compatibility (vision encoder only loading)
    - Load only a CLIP vision encoder (as supplied by ShareGPT finetunes)
    - Correct the argument parsing for --img_mean and --img_std (which were previously accessed without ever being parsed)
    - Define defaults for img_mean and img_std equal to the llava 1.5 CLIP encoder, so you do not have to provide them
  * Update convert-image-encoder-to-gguf.py
2023-11-30  main : pass LOG_TEE callback to llama.cpp log (#4033)  [Andrew Godfrey]
  * main : Call llama_log_set to use LOG_TEE
  * tabs to spaces
2023-11-30  batched.swift : update README.md (#4214)  [Miwa / Ensan]
  docs: update how to run
2023-11-30  py : fix oai proxy (#3972)  [rhjdvsgsgks]
  * fix oai proxy
    - fix generation not being stopped while the bot stops talking in chat mode
    - fix possible `slot_id` not-exist response for CORS (and pre-flight) requests
  * oai proxy: workaround for some clients (such as Chatbox)
  * use stop as separator to replace hardcoded `\n`
2023-11-29  examples : add readme files  [Georgi Gerganov]
2023-11-27  examples : iOS example with swift ui (#4159)  [Bailey Chittle]
  * copy to llama.cpp as subdir
  * attempt enabling metal, fails
  * ggml metal compiles!
  * Update README.md
  * initial conversion to new format, utf8 errors?
  * bug fixes, but now has an invalid memory access :(
  * added O3, now has insufficient memory access
  * begin sync with master
  * update to match latest code, new errors
  * fixed it!
  * fix for loop conditionals, increase result size
  * fix current workflow errors
  * attempt a llama.swiftui workflow
  * Update .github/workflows/build.yml
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-11-26  lookahead : support `-n -1` infinite generation  [Georgi Gerganov]
2023-11-26  lookahead : add example for lookahead decoding (#4207)  [Georgi Gerganov]
  * lookahead : init
  * lookahead : generate and store n-grams
  * lookahead : use loop instead of recursion to generate n-grams
  * lookahead : initial working implementation
  * lookahead : filter repeating n-grams
  * lookahead : use deterministic init
  * lookahead : add to Makefile
  * lookahead : fix a bug in the seq_id of the lookahead tokens
  * lookahead : add comments
  Co-authored-by: slaren <slarengh@gmail.com>
2023-11-25  server : OAI API compatibility (#4198)  [Georgi Gerganov]
  * Add openai-compatible POST /v1/chat/completions API endpoint to server example
  * fix code style
  * Update server README.md
  * Improve server README.md
  * Fix server.cpp code style according to review
  * server : some style changes
  * server : indentation
  * server : enable special tokens during tokenization by default
  * server : minor code style
  * server : change random string generator
  * straightforward /v1/models endpoint
  Co-authored-by: kir-gadjello <111190790+kir-gadjello@users.noreply.github.com>
  Co-authored-by: Tobi Lütke <tobi@Tobis-MacBook-Pro.local>
2023-11-24  main.swift : fix eos checking (#4197)  [eastriver]
  llama_token_eos(const struct llama_model *) was being passed a variable of type struct llama_context as its argument.
2023-11-23  Fix incorrect format strings and uninitialized variables. (#4133)  [Haohui Mai]
  * Fix incorrect format strings and uninitialized variables.
  * Address comments
  * Add the missing include statement
2023-11-23  llama : KV cache view API + better KV cache management (#4170)  [Georgi Gerganov]
  * llama : keep track of used KV cells + better KV cache management
  * llama : zero KV cache used upon clear ggml-ci
  * llama : allow exporting a view of the KV cache (#4180)
    - Allow exporting a view of the KV cache
    - Allow dumping the sequences per cell in common
    - Track max contiguous cells value and position as well
    - Fix max contiguous empty cells index calculation; make dump functions deal with length or sequence counts > 10 better
    - Fix off-by-one error in dump_kv_cache_view
    - Add doc comments for KV cache view functions; eliminate cell sequence struct, use llama_seq_id directly; minor cleanups
  * common : add -dkvc arg for enabling kv cache dumps
  Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
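  A hedged sketch of the new view API, with struct field and function names as I read them from llama.h around this revision (verify before use):

      #include <cstdio>
      #include "llama.h"

      void print_kv_usage(llama_context * ctx) {
          llama_kv_cache_view view = llama_kv_cache_view_init(ctx, /*n_max_seq=*/1);
          llama_kv_cache_view_update(ctx, &view);   // refresh the snapshot
          printf("KV cells used: %d / %d, max contiguous free run: %d\n",
                 view.used_cells, view.n_cells, view.max_contiguous);
          llama_kv_cache_view_free(&view);
      }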
2023-11-23  examples : fix typo in parallel example doc comment (#4181)  [Daniel Bevenius]
  Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2023-11-20  finetune - update readme to mention llama support only (#4148)  [Galunid]
2023-11-20  main : Add ChatML functionality to main example (#4046)  [Seb C]
  Co-authored-by: Sebastian Cramond <sebby37@users.noreply.github.com>
2023-11-20  speculative : fix prompt tokenization in speculative example (#4025)  [Branden Butler]
  * Support special tokens and not adding BOS to prompt in speculative
  * Adapt to new should_add_bos function
  * Ensure tgt and dft have same add_bos setting
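  A hedged sketch of the tokenization pattern this fixes, assuming the common helpers llama_should_add_bos_token() and llama_tokenize() available at this revision (names from common.h; verify): the target and draft prompts must be tokenized with the same add_bos decision, taken from the model metadata:

      #include <string>
      #include <vector>
      #include "common.h"

      std::vector<llama_token> tokenize_prompt(llama_context * ctx, const std::string & prompt) {
          const bool add_bos = llama_should_add_bos_token(llama_get_model(ctx));
          return ::llama_tokenize(ctx, prompt, add_bos, /*special=*/true);
      }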
2023-11-19  Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"  [Georgi Gerganov]
  This reverts commit 05e8301e4593e2a67b4bae24f093dd12ce5cc7c2.
2023-11-19  finetune : add --n-gpu-layers flag info to --help (#4128)  [Clark Saben]
2023-11-19  server : relay error messages (#4131)  [SoftwareRenderer]
2023-11-18  tokenize example: Respect normal add BOS token behavior (#4126)  [Kerfuffle]
  Allow building with Makefile
2023-11-17  tokenize : fix trailing whitespace  [Georgi Gerganov]
2023-11-17  examples : add tokenize (#4039)  [zakkor]
2023-11-17  llava : fix compilation warning that fread return value is not used (#4069)  [Huawei Lin]
2023-11-17  py : remove superfluous import statements (#4076)  [Jiří Podivín]
  Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
  Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
2023-11-17  train : move number of gpu layers argument parsing to common/train.cpp (#4074)  [Jiří Podivín]
  - introduces help entry for the argument
  - cuts '--gpu-layers' form in order to simplify usage and documentation
  Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
  Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
2023-11-17  finetune : zero the loraB initial vectors (#4082)  [Andrew Godfrey]
  * finetune : zero the loraB initial vectors
    Without this, the first iteration starts out far from the base model instead of exactly on it. Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs (though in some cases it departs from the paper by using a different distribution for the other vector).
  * tabs to spaces
  * Use ggml_set_zero instead of adding a new function
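  A minimal sketch of the point made above (illustrative tensor names, not the exact finetune.cpp identifiers): loraA keeps its random init while loraB is zeroed, so the initial B*A update is zero and training starts exactly at the base weights:

      #include "ggml.h"

      void init_lora_pair(struct ggml_tensor * lora_a, struct ggml_tensor * lora_b) {
          // lora_a is randomly initialized elsewhere (e.g. scaled gaussian noise)
          ggml_set_zero(lora_b);   // W + B*A == W at iteration 0
      }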
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  [Kerfuffle]
  * gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.
  * Respect add_bos_token GGUF metadata value
  * gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time
2023-11-13  llava : fix regression for square images in #3613 (#4056)  [M. Yusuf Sarıgöz]
2023-11-13  sync : ggml (backend v2) (#3912)  [Georgi Gerganov]
  * sync : ggml (backend v2) (wip)
  * sync : migrate examples and llama.cpp to dynamic graphs (wip)
  * sync : update tests + fix max op params to 64 ggml-ci
  * sync : ggml-cuda ggml-ci
  * llama : fix save/load state context size ggml-ci
  * sync : try to fix build on tvOS
  * sync : pass custom graph sizes in training examples
  * sync : update graph copies to new ggml API
  * sync : update sync-ggml.sh with new files
  * scripts : fix header in sync script
  * train : fix context size calculations
  * llama : increase inference graph size up to 4096 nodes
  * train : allocate grads for backward graphs
  * train : allocate grads for gb_tmp