path: root/examples
Age | Commit message | Author
2024-05-25 | android : module (#7502) | Elton Kola

* move ndk code to a new library
* add gradle file
2024-05-25 | Make tokenize CLI tool have nicer command line arguments. (#6188) | Mikko Juola

* Make tokenizer.cpp CLI tool nicer. Before this commit, tokenize was a simple CLI tool like this:

      tokenize MODEL_FILENAME PROMPT [--ids]

  This simple tool loads the model, takes the prompt, and shows the tokens llama.cpp is interpreting. This changeset makes tokenize more sophisticated, and more useful for debugging and troubleshooting:

      tokenize [-m, --model MODEL_FILENAME] [--ids] [--stdin] [--prompt] [-f, --file] [--no-bos] [--log-disable]

  It also behaves nicer on Windows now, interpreting and rendering Unicode from command line arguments and pipes no matter what code page the user has set on their terminal.
* style fix: strlen(str) == 0 --> *str == 0
* Simplify tokenize.cpp by getting rid of handling positional style arguments. It must now be invoked with long --model, --prompt etc. arguments only. Shortens the code.
* tokenize.cpp: iostream header no longer required

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: brian khuu <mofosyne@gmail.com>
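The Windows Unicode point above is worth a concrete illustration. Below is a minimal sketch of the usual Win32 approach (my own illustration of the technique, not the code from this patch): ignore the code-page-dependent `char ** argv` and re-read the command line as UTF-16, converting each argument to UTF-8.

```cpp
// Illustrative sketch only: obtain correct Unicode arguments on Windows
// regardless of the console code page by re-reading the command line as
// UTF-16 and converting each argument to UTF-8.
#ifdef _WIN32
#include <windows.h>
#include <shellapi.h>
#include <string>
#include <vector>

static std::vector<std::string> get_utf8_args() {
    int argc = 0;
    wchar_t ** wargv = CommandLineToArgvW(GetCommandLineW(), &argc);
    std::vector<std::string> args;
    for (int i = 0; i < argc; i++) {
        // first query the required buffer size, then convert UTF-16 -> UTF-8
        int n = WideCharToMultiByte(CP_UTF8, 0, wargv[i], -1, nullptr, 0, nullptr, nullptr);
        std::string s(n, '\0');
        WideCharToMultiByte(CP_UTF8, 0, wargv[i], -1, s.data(), n, nullptr, nullptr);
        s.resize(n - 1); // drop the trailing NUL written by the conversion
        args.push_back(std::move(s));
    }
    LocalFree(wargv);
    return args;
}
#endif
```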
2024-05-24 | add build shared lib in win release package (#7438) | Neo Zhang
2024-05-23 | ggml : remove ggml_flash_attn and ggml_flash_ff (#7463) | Georgi Gerganov
ggml-ci
2024-05-23 | main : minor (#7462) | Georgi Gerganov
2024-05-23 | SimpleChat: a simple and dumb web front end for testing /chat/completions and /completions end points and try chat (#7350) | HanishKVC

* SimpleChat: Add a skeletal html page. Contains a div placeholder for showing chat messages till now, a text-input for allowing user to enter the next chat message/query to the model, and a submit button to allow sending of the user entered message and chat till now to the model.
* SimpleChat: A js skeleton with SimpleChat class. Allows maintaining an array of chat messages, adding a chat message (from any of the roles be it system, user, assistant, ...), and showing chat messages till now in a given div element.
* SimpleChat: request_json, globals, startme
* SimpleChatJS: Roles Class, submitClick. Define Role class with static members corresponding to the roles. Update startme to get hold of the ui elements and attach a click handler to the submit button, which adds the user input to the xchats array and shows the chat messages till now in the chat div element. Trap DOMContentLoaded to trigger startme.
* SimpleChat:HTML: Bring in the js file
* SimpleChat: Rather value wrt input text element
* SimpleChat: Also add completions related prompt
* SimpleChat: Use common helper logic wrt json data
* SimpleChat: Move handling of submit request into its own func
* SimpleChat: Try handshake with llm over its web service endpoint
* SimpleChat:JS: Extract model response and show to user
* SimpleChat:JS: Messages/Prompt, indicate working to end user
* SimpleChat: Try keep input element in view
* SimpleChat: Diff user/assistant msgs, make input wider. Also show a default message to the user and add some metas.
* SimpleChat: Move into its own sub directory to avoid confusion
* SimpleChat:sh: Add simple shell script to run python3 http.server, so one needs to run the llm server locally, then run this script, and access it using a local browser.
* SimpleChat:JS: Try trap enter key press wrt input text field, so the user can either press the submit button or press the enter key.
* SimpleChat: Allow user to select chat or completion mode
* SimpleChat: Don't submit if already submitted and waiting. Also make chat the default selection wrt mode.
* SimpleChat:JS: Handle difference in response. Try to read the assistant response from the appropriate field in the response received; examples/server seems to return the response in a slightly different field, so try to account for that also.
* SimpleChat:JS: Force completion mode to be single message by default
* SimpleChat: Add a simple readme file
* SimpleChat:HTML: Cleanup/structure UI a bit, add input for system
* SimpleChat: Allow system prompt to be set, if provided before user
* SimpleChat: Ignore empty user input, without trimming
* SimpleChat: Alert user if they provide sysprompt late or change it
* SimpleChat: Move handling systemprompt into its own func
* SimpleChat:HTML: Add a style for system role message
* SimpleChat: Update the readme file
* SimpleChat:CSS: Move style info into its own css file, to keep it simple, clean and separate so that things are not unnecessarily cluttered.
* SimpleChat:CSS: Allow for chat div to be scrollable
* SimpleChat:JS: Try ensure the last entry in chat is visible. Needed because now only the chat div is scrollable, not the full page: the chat div size was fixed to 75% vertical height in the last commit, so the old bring-user-input-element-into-view logic won't work; instead the last element in the chat div should be brought into view.
* SimpleChat:JS: Bottom of element visible, set focus to user input. As the generated text could be multiple lines and occupy more space than the scrollable div's vertical space, make the bottom of the last element (which can be such a generated text) visible by scrolling, and ensure that the user input box has focus.
* SimpleChat: Update notes a bit. Try keep browser happy: avoid browser quirks mode with DOCTYPE, help accessibility a bit by specifying the language explicitly, specify the char encoding explicitly (utf-8 is a safe bet, even with intermixing of languages if required in future), add a cache-control http-equiv meta tag (which in all probability will be ignored), and defer js loading and execution, just for fun and future; not that critical as it stands now.
* SimpleChat:HTML: Group user input + btn together; note about multichat
* SimpleChat:JS: Allow for changing system prompt anytime, for the future
* SimpleChat:Readme: Note about handle_systemprompt begin/anytime
* SimpleChat:HTML: Add viewport meta for better mobile friendliness; without this the page content may look too small.
* SimpleChat:HtmlCss: Cleanup UI flow. Set margin wrt vmin rather than vw or vh so portrait/landscape are both ok. Use flex and flex-grow to put things on the same line and distribute available space as needed; with two main elements per line it remains simple, each line having one element that grows and one that sits at a comfortably fixed size.
* SimpleChat: Textarea for multiline user chat, in turn shift+enter for a newline
* SimpleChat: Make vertical layout better responsive (flex based). Also needed to make things cleaner and properly usable whether landscape or portrait, after changing to a multiline textarea rather than a single line user input. Avoid hardcoding the chat-till-now display area height; instead make it flex-growable within a flex column of ui elements within a fixed vertical area.
* SimpleChat: Rename simplechat.html to index.html, update readme. Instead of providing a separate shell script, update the readme wrt how to run/use this web front end.
* SimpleChat: Screen fixed view and scrolling, printing full
* SimpleChat:JS:CI: Avoid space at end of jsdoc param line
* SimpleChat:JS: MultiChat initial skeleton; will help maintain multiple independent chats in future.
* SimpleChat:JS: Move system prompt begin/anytime into SimpleChat
* SimpleChat:JS: Keep MultiChatUI simple for now; worry about different chats with different servers later.
* SimpleChat:JS: Move handle submit into MultiChat, build on same. Create an instance of MultiChatUI and in turn an instance of a chat session, which is what the UI will in turn work on.
* SimpleChat:JS: Move to a dictionary of SimpleChat, instead of an array
* SimpleChat: Move ui elements into MultiChatUI, update el IDs. Move ui elements into MultiChatUI so that the current handleUserSubmit doesn't need to take the element arguments; also in future, when the user is allowed to switch between different chat sessions, the UI can be updated as needed using the elements already known to the MultiChatUI instance. Rename the element ids so that they follow a common convention and one can identify what each element represents in a more consistent manner.
* SimpleChat:MCUI: Show available chat sessions, try switch between them. Previous commits brought in / consolidated existing logic into the MultiChatUI class; now start adding logic towards multichat support: show buttons indicating available chat sessions, and on session button click, try to switch to that session.
* SimpleChat:MCUI: Store and use current chat session id. Also allow switching the chat session optionally, wrt some of the related helpers. Set up two chat sessions by default.
* SimpleChat:MCUI: Delay enabling user-input to avoid race. Re-enable user-input only after the response to a user query has been added to the chat-div; this ensures that if the user tries to switch chat session, it won't be allowed till the chat-request-response flow is done.
* SimpleChat: Take care of system prompt. Helper to get the latest system prompt and in turn use it to set the system prompt ui when switching. Ensure that the system prompt is set if and when the enter key is pressed.
* SimpleChat: GetSystemLatest, fix an oversight.
* SimpleChat:MCUI: Allow selected chat-session btn to be highlighted. Also have a general helper for setting the class of children.
* SimpleChat: Cleanup corners. Show the system prompt in the chat space when it is set by pressing enter, as feedback to the user. Alert the user if they try to switch chat session in the middle of waiting for a response from the ai model.
* SimpleChat:MCUI: Ensure req-resp failure doesn't lock up things
* SimpleChat:MCUI: Support for new chat sessions; also a general create-button helper.
* SimpleChat:MCUI: CreateSessionBtn helper, use wrt NewChat. Also fix an oversight wrt using stale data for the list of chat sessions.
* SimpleChat:MCUI: NewChat btn first, before existing chat sessions
* SimpleChat:MCUI:CornerCases: Skip new chat, show only if current. Skip NewChat if the user cancels or if one is waiting for a response from the ai model. Don't show a chat with a newly got ai model response if the current chat session has changed somehow; the chat session shouldn't be allowed to change while there is a pending response, but this is an additional sanity check.
* SimpleChat: Update readme, title, show usage if no chat to show
* SimpleChat: Cleanup the log/dialog messages a bit
2024-05-22 | common : normalize naming style (#7462) | Georgi Gerganov

* common : normalize naming style (ggml-ci)
* common : match declaration / definition order
* zig : try to fix build
2024-05-22 | phi3 : duplicate rope factors in each layer (#7447) | slaren

* phi3 : duplicate rope factors in each layer
* phi3 : set phi-3 model type as 14B
* model loader : simplify the process for duplicating model tensors
* llama-bench : remove default pg test
* replace bool parameters in llama_model_loader with named flags
2024-05-21 | llama : add phi3 128K model support (#7225) | liuwei-git

* add phi3 128k support in convert-hf-to-gguf
* add phi3 128k support in cuda
* address build warnings on llama.cpp
* adjust index value in cuda long rope freq factors
* add long rope support in ggml cpu backend
* make freq factors only depend on ctx size
* remove unused rope scaling type 'su' from gguf converter
* fix lint warnings on convert-hf-to-gguf.py
* set to the short freq factor when context size is smaller than trained context size
* add one line of comments
* metal : support rope freq_factors
* ggml : update ggml_rope_ext API to support freq. factors
* backends : add dev messages to support rope freq. factors
* minor : style
* tests : update to use new rope API
* backends : fix pragma semicolons
* minor : cleanup
* llama : move rope factors from KV header to tensors
* llama : remove tmp assert
* cuda : fix compile warning
* convert : read/write n_head_kv
* llama : fix uninitialized tensors

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
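The freq-factor mechanism behind the 128K support is easier to see in scalar form. Here is a sketch of the standard long-rope arithmetic under the commit's description (the function name and layout are illustrative, not the actual llama.cpp code): each RoPE frequency is divided by a per-dimension factor, and the "short" or "long" factor set is chosen by comparing the requested context size against the trained context size.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative sketch of per-dimension rope frequency factors (names are
// hypothetical, not the llama.cpp implementation). Standard RoPE uses
// theta_i = base^(-2i/d); the long-rope variant divides each theta_i by a
// per-dimension factor, picking the "short" or "long" factor set depending
// on whether the requested context exceeds the trained context size.
static std::vector<float> rope_thetas(
        int head_dim, float freq_base,
        uint32_t n_ctx, uint32_t n_ctx_trained,
        const std::vector<float> & factors_short,
        const std::vector<float> & factors_long) {
    const auto & factors = (n_ctx > n_ctx_trained) ? factors_long : factors_short;
    std::vector<float> thetas(head_dim / 2);
    for (int i = 0; i < head_dim / 2; i++) {
        const float theta = std::pow(freq_base, -2.0f * i / head_dim);
        thetas[i] = theta / factors[i]; // the freq factor stretches this dimension
    }
    return thetas;
}
```

This also motivates the "move rope factors from KV header to tensors" bullet: the factors are a per-dimension array, so they fit more naturally as model tensors than as scalar KV metadata.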
2024-05-21 | `grammars`: fix resampling logic regression (#7424) | Olivier Chafik
2024-05-21 | examples: cache hf model when --model not provided (#7353) | Amir
2024-05-21 | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425) | jaime-m-p

* Update brute force test: add_special
* Update brute force test: default values for add_bos_token and add_eos_token
* Enable rtrim when pre-inserting BOS

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Revert "server : fix test regexes"
2024-05-20 | Tokenizer SPM fixes for phi-3 and llama-spm (#7375) | jaime-m-p

* Update brute force test: special tokens
* Fix added tokens
  - Try to read 'added_tokens.json'.
  - Try to read 'tokenizer_config.json'.
  - Try to read 'tokenizer.json'.
* Fix special tokens rtrim

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* server : fix test regexes
2024-05-20 | perplexity: update README FP16 results [no ci] (#7413) | Johannes Gäßler
2024-05-20 | server : fix temperature + disable some tests (#7409) | Georgi Gerganov

* server : fix temperature
* server : disable tests relying on parallel determinism
* ci : change server Debug -> RelWithDebInfo
2024-05-20 | server : tuning tests (#7388) | Georgi Gerganov

* server : don't pass temperature as string
* server : increase timeout
* tests : fix the fix 0.8f -> 0.8 (ggml-ci)
* tests : set explicit temperature
2024-05-20 | server : return error on too large embedding input (#7389) | Georgi Gerganov
2024-05-20 | tests : fix --keep_split -> --keep-split (#7374) | Georgi Gerganov
2024-05-19 | quantize : fix --keep-split check (#7374) | Fred Douglas
2024-05-19 | server: add test for token probs (#7347) | Johannes Gäßler
2024-05-19 | server: fix seed being reported back (#7382) | Johannes Gäßler
2024-05-19 | cmake : update android comments (#7341) | Georgi Gerganov
2024-05-18 | android : use "ci-android" branch for CI (#7341) | Georgi Gerganov
* android : use "ci-android" branch for CI
* ggml : disable SIMD exp and silu for 32-bit ARM (ggml-ci)
* android : do not fetch, use add_subdirectory instead
* cmake : provide binary dir
2024-05-18 | server: correct --threads documentation [no ci] (#7362) | Johannes Gäßler
2024-05-18 | perplexity : ndot progress and show stats with < 100 tasks (#7348) | strawberrymelonpanda

Fix a floating-point error with ndot printing, and allow end stats to be shown on lower task counts for multiple-choice tasks.
2024-05-17 | rpc : set SO_REUSEADDR for the server socket (#7320) | Radoslav Gerganov
ref: #7293
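For context on the change itself: SO_REUSEADDR lets a restarted server rebind its listening port immediately instead of failing while the old socket lingers in TIME_WAIT. A minimal POSIX sketch of the pattern (illustrative, not the actual rpc-server code):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// Minimal POSIX sketch (not the actual rpc-server code): enable SO_REUSEADDR
// before bind() so a restarted server can reuse a port still in TIME_WAIT.
int listen_on(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -1;
    }
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    sockaddr_in addr = {};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);
    if (bind(fd, (sockaddr *) &addr, sizeof(addr)) != 0 || listen(fd, 16) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```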
2024-05-17 | server : add support for the RPC backend (#7305) | Radoslav Gerganov
ref: #7292
2024-05-17 | [Server] Added --verbose option to README [no ci] (#7335) | Leon Knauer
2024-05-16 | Revert "server bench: fix bench not waiting for model load (#7284)" (#7334) | Pierrick Hymbert
This reverts commit 583fd6b000ec9ad1b465b5c98524f4a0ae388077.
2024-05-16 | rpc : get available mem for the CPU backend | Radoslav Gerganov

This can be overridden with the -m command line option.

ref: #7293
2024-05-16 | rpc : add command line arg for specifying backend memory | Radoslav Gerganov
ref: #7293
2024-05-16 | doc: add references to hugging face GGUF-my-repo quantisation web tool. (#7288) | Vaibhav Srivastav

* chore: add references to the quantisation space.
* fix grammar lol.
* Update README.md

Co-authored-by: Julien Chaumond <julien@huggingface.co>

* Update README.md

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Julien Chaumond <julien@huggingface.co>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-15 | ggml : tag ggml_tensor::backend as deprecated (#7290) | slaren
2024-05-15 | embedding : free the batch after execution (#7297) | dm4
2024-05-15 | server bench: fix bench not waiting for model load (#7284) | Johannes Gäßler
2024-05-14 | server: free sampling contexts on exit (#7264) | Steve Grubb
* server: free sampling contexts on exit. This cleans up the last leak found by the address sanitizer.
* fix whitespace
* fix whitespace
2024-05-14 | Revert "move ndk code to a new library (#6951)" (#7282) | Brian
This reverts commit efc8f767c8c8c749a245dd96ad4e2f37c164b54c.
2024-05-14 | ggml : add RPC backend (#6829) | Radoslav Gerganov

* ggml : add RPC backend. The RPC backend proxies all operations to a remote server which runs a regular backend (CPU, CUDA, Metal, etc).
* set TCP_NODELAY
* add CI workflows
* Address review comments
* fix warning
* implement llama_max_devices() for RPC
* Address review comments
* Address review comments
* wrap sockfd into a struct
* implement get_alignment and get_max_size
* add get_device_memory
* fix warning
* win32 support
* add README
* readme : trim trailing whitespace
* Address review comments
* win32 fix
* Address review comments
* fix compile warnings on macos
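The proxying idea in miniature: below is a hypothetical sketch of the client side (`send_msg` and the framing are my own illustration, not the actual ggml-rpc wire protocol). Each operation is serialized into a length-prefixed message and written to the socket; TCP_NODELAY, mentioned in the commit, keeps the many small command packets from being delayed by Nagle's algorithm.

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the client side of an RPC backend (not the actual
// ggml-rpc protocol): every backend operation is serialized into a
// length-prefixed message and shipped to the server, which executes it on a
// regular local backend and streams the result back.
static bool send_msg(int sockfd, uint8_t cmd, const std::vector<uint8_t> & payload) {
    uint64_t len = payload.size();
    if (send(sockfd, &cmd, sizeof(cmd), 0) != (ssize_t) sizeof(cmd)) return false;
    if (send(sockfd, &len, sizeof(len), 0) != (ssize_t) sizeof(len)) return false;
    return send(sockfd, payload.data(), len, 0) == (ssize_t) len;
}

static void setup_socket(int sockfd) {
    // TCP_NODELAY (mentioned in the commit) disables Nagle's algorithm, so the
    // many small command messages go out immediately instead of being batched.
    int yes = 1;
    setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &yes, sizeof(yes));
}
```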
2024-05-14 | move ndk code to a new library (#6951) | Elton Kola
2024-05-14 | docs: Fix typo and update description for --embeddings flag (#7026) | Ryuei

- Change '--embedding' to '--embeddings' in the README
- Update the description to match the latest --help output
- Add a caution about defining physical batch size
2024-05-14 | llava-cli: fix base64 prompt (#7248) | k.h.lai
2024-05-13 | perplexity: add BF16 vs. FP16 results (#7150) | Johannes Gäßler
2024-05-13 | change default temperature of OAI compat API from 0 to 1 (#7226) | Benjamin Findley
* change default temperature of OAI compat API from 0 to 1
* make tests explicitly send temperature to OAI API
2024-05-11 | fix system prompt handling (#7153) | Xuan Son Nguyen
2024-05-11 | server : free llama_batch on exit (#7212) | Steve Grubb
* [server] Cleanup a memory leak on exit. There are a couple of memory leaks on exit of the server, and this one hides others: after cleaning this up, you can see leaks on slots. But that is another patch, to be sent after this.
* make tab into spaces
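The underlying fix pattern, for both this and the embedding-batch leak above, is pairing llama_batch_init() with llama_batch_free(). A minimal sketch (the RAII wrapper is my own illustration, not code from the patch):

```cpp
#include "llama.h"

// Minimal sketch of the leak-fix pattern: every llama_batch_init() must be
// paired with llama_batch_free() before exit. The RAII wrapper below is my
// own illustration, not code from the patch; it frees the batch on scope
// exit, which is what keeps the address sanitizer quiet.
struct batch_guard {
    llama_batch batch;

    batch_guard(int32_t n_tokens, int32_t embd, int32_t n_seq_max)
        : batch(llama_batch_init(n_tokens, embd, n_seq_max)) {}

    ~batch_guard() { llama_batch_free(batch); }
};
```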
2024-05-11 | server: fix reported top tokens for temperature 0 (#7203) | Johannes Gäßler
2024-05-11 | llama : add Jina Embeddings architecture (#6826) | Joan Fontanals
* feat: first things to do
* feat: create tensors for Jina architecture
* fix: use other tensors
* feat: embedding gets results
* fix: fix usage of ALIBI
* fix: clean prints
* fix: do some cleanup unused vars
* fix: revert changes to Makefile and CMakeLists
* fix: revert some changes
* fix: fix small detail
* fix: fix convert formatting
* fix: fix linting and editor
* feat: set proper vocab settings
* fix: JinaBertForMaskedLM registration
* feat: support q_normalization and k_normalization in Jina arch
* feat: handle gpt2 tokenizer with Jina architecture
* feat: example comments in embedding
* feat: rename Jina Bert to Jina Bert V2
* fix: add some changes as per review
* feat: proper KQ_pos for Jina embeddings
* feat: add capacity to load models ES and DE for Spanish
* llama : fix pre-tokenizers
* ggml : full ALiBi support
* ggml : update ggml_soft_max_ext() CUDA, SYCL
* ggml : ggml_flash_attn_ext() support ALiBi (CPU)
* ggml : ggml_flash_attn_ext() support ALiBi (Metal)
* ggml : fix warning
* ggml : ggml_flash_attn_ext() support ALiBi (CUDA) (ggml-ci)
* minor : clean-up
* embedding : add warning about missing SEP

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
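Since "full ALiBi support" is the load-bearing ggml change here, the standard ALiBi formulation is worth spelling out. A sketch of the published technique (not the ggml implementation): instead of positional embeddings, each attention score gets a linear bias proportional to the query/key distance, with a per-head slope.

```cpp
#include <cmath>
#include <vector>

// Sketch of the standard ALiBi formulation (not the ggml implementation).
// For n_heads a power of two, head h gets slope 2^(-8*(h+1)/n_heads);
// the paper interpolates slopes for other head counts.
static std::vector<float> alibi_slopes(int n_heads) {
    std::vector<float> slopes(n_heads);
    for (int h = 0; h < n_heads; h++) {
        slopes[h] = std::pow(2.0f, -8.0f * (h + 1) / n_heads);
    }
    return slopes;
}

// Bias added to the attention score of query position i and key position j.
static float alibi_bias(float slope, int i, int j) {
    return -slope * (i - j); // i >= j in the causal case, so the bias is <= 0
}
```

This is why the bullets above touch ggml_soft_max_ext() and ggml_flash_attn_ext() on every backend: the bias has to be applied inside the attention score computation, whichever kernel performs it.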
2024-05-10 | llama-bench : add pp+tg test type (#7199) | slaren
2024-05-10 | Fix memory bug in grammar parser (#7194) | Justine Tunney
The llama.cpp grammar parser had a bug where forgetting to add a closing quotation mark to strings would cause parsing to crash. Anyone running a server on a public endpoint is advised to upgrade. To reproduce this bug:

    ./llamafile -m foo.gguf -p bar --grammar 'root::="'

Credit for discovering and reporting this issue goes to Eclypsium Security Researcher Richard Johnson <Richard.johnson@eclypsium.com>.
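The bug class is a scanner that walks past the terminating NUL when the closing quote never appears. A hedged sketch of the defensive pattern (illustrative, not the actual grammar-parser code): check for end-of-input inside the scan loop and fail cleanly instead of reading past the buffer.

```cpp
#include <stdexcept>

// Illustrative sketch of the bug class (not the actual grammar parser): a
// scanner for a quoted string must treat end-of-input as a parse error.
// A loop that only checks for the closing quote runs off the end of the
// buffer whenever the quote is missing.
static const char * parse_quoted(const char * src) {
    const char * pos = src + 1; // skip the opening '"'
    while (*pos != '"') {
        if (*pos == '\0') {
            // unterminated string: fail cleanly instead of reading past the end
            throw std::runtime_error("expecting closing quote at end of input");
        }
        pos++;
    }
    return pos + 1; // position just past the closing quote
}
```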
2024-05-10 | Main+: optionally allow special tokens from user in interactive mode (#7097) | HanishKVC

@hanishkvc added a new `--interactive-specials` flag which allows inserting special tokens from the user side into the embedding stream.