2024-05-25  ggml: aarch64: SVE kernels for q8_0_q8_0, q4_0_q8_0 vector dot (#7433)  (Masaya, Kato)
    * Add SVE support for q4_0_q8_0 q8_0_q8_0
    * remove ifdef
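For orientation, the sketch below shows the kind of SVE int8 dot product these kernels are built around. It is illustrative only and assumes an SVE-capable aarch64 toolchain; the real ggml kernels additionally handle the q4_0/q8_0 block layout and per-block scales, which are omitted here.

```cpp
// Illustrative sketch, not the actual ggml kernel: plain int8 dot product with SVE.
// Build with, e.g.:  g++ -O3 -march=armv8-a+sve sve_dot.cpp
#include <arm_sve.h>
#include <cstdint>

int64_t dot_i8_sve(const int8_t * x, const int8_t * y, int64_t n) {
    svint32_t acc = svdup_n_s32(0);
    for (int64_t i = 0; i < n; i += (int64_t) svcntb()) {
        svbool_t pg = svwhilelt_b8(i, n);     // predicate also covers the final partial vector
        svint8_t vx = svld1_s8(pg, x + i);    // inactive lanes are loaded as zero
        svint8_t vy = svld1_s8(pg, y + i);
        acc = svdot_s32(acc, vx, vy);         // SDOT: accumulate 4 int8 products per int32 lane
    }
    return svaddv_s32(svptrue_b32(), acc);    // horizontal sum of the accumulator
}
```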
2024-05-25  android : module (#7502)  (Elton Kola)
    * move ndk code to a new library
    * add gradle file
2024-05-25  fix missing slash in `fs_get_cache_directory()` (#7503)  (Xuan Son Nguyen)
    * fix missing slash in fs_get_cache_directory()
    * use LOCALAPPDATA for fs_get_cache_directory()
    * better code style
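As a rough illustration of what such a helper has to get right, here is a hedged sketch of per-platform cache-directory resolution with a guaranteed trailing separator. It is not the actual common.cpp implementation; the helper name ensure_trailing_slash and the fallback order are made up for this example.

```cpp
// Hedged sketch only: resolve a cache directory and guarantee a trailing slash.
#include <cstdlib>
#include <string>

static std::string ensure_trailing_slash(std::string path) {  // hypothetical helper
    if (!path.empty() && path.back() != '/' && path.back() != '\\') {
        path += '/';
    }
    return path;
}

std::string get_cache_directory_sketch() {
#if defined(_WIN32)
    const char * base = std::getenv("LOCALAPPDATA");    // e.g. C:\Users\me\AppData\Local
    std::string dir = base ? ensure_trailing_slash(base) : std::string();
#else
    const char * base = std::getenv("XDG_CACHE_HOME");
    std::string dir;
    if (base) {
        dir = ensure_trailing_slash(base);
    } else if (const char * home = std::getenv("HOME")) {
        dir = ensure_trailing_slash(std::string(home) + "/.cache");
    }
#endif
    return dir + "llama.cpp/";                           // app-specific subdirectory
}
```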
2024-05-25  Make tokenize CLI tool have nicer command line arguments. (#6188)  (Mikko Juola)
    * Make tokenize.cpp CLI tool nicer.
      Before this commit, tokenize was a simple CLI tool like this:
          tokenize MODEL_FILENAME PROMPT [--ids]
      This simple tool loads the model, takes the prompt, and shows the tokens llama.cpp is interpreting.
      This changeset makes the tokenize tool more sophisticated, and more useful for debugging and troubleshooting:
          tokenize [-m, --model MODEL_FILENAME] [--ids] [--stdin] [--prompt] [-f, --file] [--no-bos] [--log-disable]
      It also behaves more nicely on Windows now, interpreting and rendering Unicode from command line arguments and pipes no matter what code page the user has set on their terminal.
    * style fix: strlen(str) == 0 --> *str == 0
    * Simplify tokenize.cpp by getting rid of handling positional style arguments. It must now be invoked with long --model, --prompt etc. arguments only. Shortens the code.
    * tokenize.cpp: iostream header no longer required
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    Co-authored-by: brian khuu <mofosyne@gmail.com>
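For reference, the sketch below shows the core of such a tool against the public llama.h API as it stood around this period: load the model, tokenize the prompt, print the token IDs. This is a simplified sketch, not the examples/tokenize code itself, and the exact llama_tokenize signature may differ between versions.

```cpp
// Hedged sketch: print token IDs for a prompt. Assumes the llama.h API of this period.
#include "llama.h"
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

int main(int argc, char ** argv) {
    if (argc < 3) {
        std::fprintf(stderr, "usage: %s MODEL_FILENAME PROMPT\n", argv[0]);
        return 1;
    }
    const std::string prompt = argv[2];

    llama_backend_init();
    llama_model * model = llama_load_model_from_file(argv[1], llama_model_default_params());
    if (model == nullptr) {
        return 1;
    }

    // Generously sized buffer; the real API reports the required size via a negative return.
    std::vector<llama_token> tokens(prompt.size() + 16);
    const int n = llama_tokenize(model, prompt.c_str(), (int32_t) prompt.size(),
                                 tokens.data(), (int32_t) tokens.size(),
                                 /*add_special=*/true, /*parse_special=*/false);
    for (int i = 0; i < n; ++i) {
        std::printf("%d\n", tokens[i]);
    }

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```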
2024-05-25  gguf-py : fix and simplify quantized shape round-trip (#7483)  (compilade)
    * gguf-py : fix and simplify quantized shape round-trip
    * gguf-py : remove unused import
2024-05-24  flake.lock: Update (#7232)  (Georgi Gerganov)
    Flake lock file updates:
    • Updated input 'flake-parts':
        'github:hercules-ci/flake-parts/e5d10a24b66c3ea8f150e47dfdb0416ab7c3390e?narHash=sha256-yzcRNDoyVP7%2BSCNX0wmuDju1NUCt8Dz9%2BlyUXEI0dbI%3D' (2024-05-02)
      → 'github:hercules-ci/flake-parts/8dc45382d5206bd292f9c2768b8058a8fd8311d9?narHash=sha256-/GJvTdTpuDjNn84j82cU6bXztE0MSkdnTWClUCRub78%3D' (2024-05-16)
    • Updated input 'nixpkgs':
        'github:NixOS/nixpkgs/63c3a29ca82437c87573e4c6919b09a24ea61b0f?narHash=sha256-4cPymbty65RvF1DWQfc%2BBc8B233A1BWxJnNULJKQ1EY%3D' (2024-05-02)
      → 'github:NixOS/nixpkgs/4a6b83b05df1a8bd7d99095ec4b4d271f2956b64?narHash=sha256-%2BNpbZRCRisUHKQJZF3CT%2Bxn14ZZQO%2BKjxIIanH3Pvn4%3D' (2024-05-17)
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-05-24  docker.yml: disable light-intel and server-intel test (#7515)  (Brian)
    * docker.yml: disable light-intel test
    * docker.yml: disable server-intel test
2024-05-24  Add support for ArcticForCausalLM (#7020)  (fairydreaming)
    * common : increase max number of experts to 128
    * common : add tensor LLM_TENSOR_FFN_NORM_EXPS for normalization before MoE that runs in parallel to attention + ffn
    * gguf-py : add architecture-specific block mappings that override selected general block mappings
    * convert-hf : add model conversion support for ArcticForCausalLM
    * convert-hf : use added_tokens_decoder from tokenizer_config.json to redefine tokens from SentencePiece model (only for ArcticForCausalLM)
    * llama : add inference support for LLM_ARCH_ARCTIC
    Co-authored-by: Stanisław Szymczyk <sszymczy@gmail.com>
2024-05-24  add build shared lib in win release package (#7438)  (Neo Zhang)
2024-05-23  readme : remove trailing space (#7469)  (Georgi Gerganov)
2024-05-23  ggml : silence UB sanitizer error during iq2_xxs quantization (#0)  (Georgi Gerganov)
2024-05-23  Fix phi3 chat template confusion with zephyr (#7449)  (Tristan Druyen)
    * Fix phi3 template matching vs zephyr
    * Add regression test for new phi3 chat template
    * Implement review suggestions
    * Fix phi3 jinja test templates & match by <|end|>
    * Apply suggestion
      Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
    * Add all phi3 template variants in tests
    * Remove unneeded message trimming
      Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
    * Fix tests to not expect trimmed messages
    Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
2024-05-23  readme : add Bunny in supported models [no ci] (#7469)  (Raj Hammeer Singh Hada)
2024-05-23  llama : add getters for n_threads/n_threads_batch (#7464)  (Daniel Bevenius)
    * llama : add getters for n_threads/n_threads_batch
      This commit adds two new functions to the llama API. They can be used to get the number of threads used for generating a single token and the number of threads used for prompt and batch processing (multiple tokens).
      The motivation for this is that we want to be able to get the number of threads that a context is using. The main use case is testing/verification that the number of threads is set correctly.
      Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
    * squash! llama : add getters for n_threads/n_threads_batch
      Rename the getters to llama_n_threads and llama_n_threads_batch.
      Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
    Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
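A hedged sketch of how these getters might be exercised in a quick check, assuming the llama_n_threads()/llama_n_threads_batch() declarations this change introduces; the surrounding setup is simplified and exact types may differ slightly from the actual header.

```cpp
// Hedged sketch: confirm the thread counts set at context creation are reported back.
#include "llama.h"
#include <cassert>

void check_thread_getters(llama_model * model) {
    llama_context_params cparams = llama_context_default_params();
    cparams.n_threads       = 4;   // threads used for single-token generation
    cparams.n_threads_batch = 8;   // threads used for prompt/batch processing

    llama_context * ctx = llama_new_context_with_model(model, cparams);
    assert(ctx != nullptr);

    assert(llama_n_threads(ctx)       == 4);
    assert(llama_n_threads_batch(ctx) == 8);

    llama_free(ctx);
}
```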
2024-05-23  ci : use Pythia models instead of OpenLlama (#7470)  (Georgi Gerganov)
    * ci : start using Pythia models over OpenLlama (ggml-ci)
    * ci : disable q2_k ppl tests
    * ci : use convert-hf-to-gguf.py
    * ci : update gg_get_model
    * ci : fix convert outfile name (ggml-ci)
    * llama : gptneox arch use F32 attn prec (ggml-ci)
2024-05-23  readme : add GPT-NeoX + Pythia to the list of supported models (#7491)  (Victor Nogueira)
2024-05-23  Add missing inference support for GPTNeoXForCausalLM (Pythia and GPT-NeoX base models) (#7461)  (fairydreaming)
    * convert-hf : add conversion of bloom-style qkv tensor to gpt-style qkv (code borrowed from BloomModel)
    * llama : add inference support for LLM_ARCH_GPTNEOX
    * llama : add model types for every Pythia variant and GPT-NeoX
    Co-authored-by: Stanisław Szymczyk <sszymczy@gmail.com>
2024-05-23  llama : rename n_ctx -> cache.size, less confusing (#0)  (Georgi Gerganov)
2024-05-23  labeler.yml: add embedding label detector [no ci] (#7482)  (Brian)
2024-05-23  ggml : remove ggml_flash_attn and ggml_flash_ff (#7463)  (Georgi Gerganov)
    ggml-ci
2024-05-23  ggml : drop support for QK_K=64 (#7473)  (Georgi Gerganov)
    * ggml : drop support for QK_K=64 (ggml-ci)
    * opencl : restore QK_K=256 define
2024-05-23  Update vulkan rope implementation to support frequency factors (#7475)  (0cc4m)
2024-05-23  main : minor (#7462)  (Georgi Gerganov)
2024-05-23  CUDA: fix FA out-of-bounds reads (#7479)  (Johannes Gäßler)
2024-05-23  SimpleChat: a simple and dumb web front end for testing /chat/completions and /completions end points and try chat (#7350)  (HanishKVC)
    * SimpleChat: Add a skeletal html page. Contains a div placeholder for showing chat messages till now, a text-input for allowing user to enter next chat message/query to the model, and a submit button to allow sending of the user-entered message and chat till now to the model.
    * SimpleChat: A js skeleton with SimpleChat class. Allows maintaining an array of chat messages, adding a chat message (from any of the roles, be it system, user, assistant, ...), and showing the chat messages till now in a given div element.
    * SimpleChat: request_json, globals, startme
    * SimpleChatJS: Roles class, submitClick. Define Role class with static members corresponding to the roles. Update startme to get hold of the ui elements and attach a click handler to the submit button, which adds the user input to the xchats array and shows the chat messages till now in the chat div element. Trap DOMContentLoaded to trigger startme.
    * SimpleChat:HTML: Bring in the js file
    * SimpleChat: Rather value wrt input text element
    * SimpleChat: Also add completions related prompt
    * SimpleChat: Use common helper logic wrt json data
    * SimpleChat: Move handling of submit request into its own func
    * SimpleChat: Try handshake with llm over its web service endpoint
    * SimpleChat:JS: Extract model response and show to user
    * SimpleChat:JS: Messages/Prompt, indicate working to end user
    * SimpleChat: Try keep input element in view
    * SimpleChat: Diff user/assistant msgs, make input wider. Also show a default message to the user and add some metas.
    * SimpleChat: Move into its own sub directory to avoid confusion
    * SimpleChat:sh: Add simple shell script to run python3 http.server, so one needs to run the llm server locally, then run this script and access it using a local browser.
    * SimpleChat:JS: Try trap enter key press wrt input text field, so the user can either press the submit button or press the enter key.
    * SimpleChat: Allow user to select chat or completion mode
    * SimpleChat: Don't submit if already submitted and waiting. Also make chat the default selection wrt mode.
    * SimpleChat:JS: Handle difference in response. Try to read the assistant response from the appropriate field in the response received. Also examples/server seems to return the response in a slightly different field, so try to account for that as well.
    * SimpleChat:JS: Force completion mode to be single message by default
    * SimpleChat: Add a simple readme file
    * SimpleChat:HTML: Cleanup/structure UI a bit, add input for system
    * SimpleChat: Allow system prompt to be set, if provided before user
    * SimpleChat: Ignore empty user input, without trimming
    * SimpleChat: Alert user if they provide sysprompt late or change it
    * SimpleChat: Move handling systemprompt into its own func
    * SimpleChat:HTML: Add a style for system role message
    * SimpleChat: Update the readme file
    * SimpleChat:CSS: Move style info into its own css file, to keep it simple, clean and separate so that things are not unnecessarily cluttered.
    * SimpleChat:CSS: Allow for chat div to be scrollable
    * SimpleChat:JS: Try ensure the last entry in chat is visible. Needed because now only the chat div is scrollable and not the full page. In the last commit the chat div size was fixed to 75% vertical height, so the full page no longer scrolls; the old bring-user-input-element-to-view approach won't work, so instead the last element in the chat div should be brought into view.
    * SimpleChat:JS: Bottom of element visible, set focus to user input. As the generated text could be multiple lines and occupy more space than the full scrollable div's vertical space, make the bottom of the last element (which can be such a generated text) in the div visible by scrolling. Ensure that the user input box has focus.
    * SimpleChat: Update notes a bit. Try keep browser happy. Avoid browser quirks mode with DOCTYPE. Help with accessibility a bit by specifying the language explicitly. Specify the char encoding explicitly; utf-8 is a safe bet, even with intermixing of languages if required in future. Add a cache-control http-equiv meta tag, which in all probability will be ignored. Defer js loading and execution, just for fun and future, not that critical here as it stands now.
    * SimpleChat:HTML: Group user input+btn together; note about multichat
    * SimpleChat:JS: Allow for changing system prompt anytime for future
    * SimpleChat:Readme: Note about handle_systemprompt begin/anytime
    * SimpleChat:HTML: Add viewport meta for better mobile friendliness. Without this the page content may look too small.
    * SimpleChat:HtmlCss: Cleanup UI flow. Set margin wrt vmin rather than vw or vh so portrait/landscape are ok. Use flex and flex-grow to put things on the same line as well as distribute available space as needed. Given two main elements per line, it remains simple. In each line have one element which grows and one that sits with a basic, comfortably fixed size.
    * SimpleChat: textarea for multiline user chat; in turn shift+enter for enter
    * SimpleChat: Make vertical layout better responsive (flex based). Also needed to make things cleaner and properly usable whether landscape or portrait, after changing to a multiline textarea rather than a single line user input. Avoid hardcoding the chat-till-now display area height; instead make it flex-growable within a flex column of ui elements within a fixed vertical area.
    * SimpleChat: Rename simplechat.html to index.html, update readme. Instead of providing a separate shell script, update the readme wrt how to run/use this web front end.
    * SimpleChat: Screen fixed view and scrolling, printing full
    * SimpleChat:JS:CI: Avoid space at end of jsdoc param line
    * SimpleChat:JS: MultiChat initial skeleton. Will help maintain multiple independent chats in future.
    * SimpleChat:JS: Move system prompt begin/anytime into SimpleChat
    * SimpleChat:JS: Keep MultiChatUI simple for now. Worry about different chats with different servers later.
    * SimpleChat:JS: Move handle submit into MultiChat, build on same. Create an instance of MultiChatUI and in turn an instance of a chat session, which is what the UI will in turn work on.
    * SimpleChat:JS: Move to dictionary of SimpleChat, instead of array
    * SimpleChat: Move ui elements into MultiChatUI, update el IDs. Move ui elements into MultiChatUI, so that the current handleUserSubmit doesn't need to take the element arguments. Also in future, when the user is allowed to switch between different chat sessions, the UI can be updated as needed by using the elements already known to the MultiChatUI instance. Rename the element ids so that they follow a common convention and one can identify what each element represents in a more consistent manner.
    * SimpleChat:MCUI: Show available chat sessions, try switch btw them. Previous commits brought in / consolidated existing logic into the MultiChatUI class. Now start adding logic towards multichat support: show buttons indicating available chat sessions; on session button click, try switch to that session.
    * SimpleChat:MCUI: Store and use current chat session id. Also allow switching the chat session optionally, wrt some of the related helpers. Setup for two chat sessions by default.
    * SimpleChat:MCUI: Delay enabling user-input to avoid race. Re-enable user-input only after the response to a user query has been updated to the chat-div. This ensures that if the user tries to switch chat session, it won't be allowed till the chat-request-response flow is done.
    * SimpleChat: Take care of system prompt. Helper to get the latest system prompt and in turn use the same to set the system prompt ui when switching. Ensure that the system prompt is set if and when the enter key is pressed.
    * SimpleChat: GetSystemLatest, fix an oversight.
    * SimpleChat:MCUI: Allow selected chat-session btn to be highlighted. Also have a general helper for setting the class of children.
    * SimpleChat: Cleanup corners. Show the system prompt in the chat space when it is set by pressing enter, as feedback to the user. Alert the user if they try to switch chat session in the middle of waiting for a response from the ai model.
    * SimpleChat:MCUI: Ensure req-resp failure doesn't lock up things
    * SimpleChat:MCUI: Support for new chat sessions. Also a general create-button helper.
    * SimpleChat:MCUI: CreateSessionBtn helper, use wrt NewChat. Also fix an oversight wrt using stale data for the list of chat sessions.
    * SimpleChat:MCUI: NewChat btn first before existing chat sessions
    * SimpleChat:MCUI:CornerCases: Skip new chat, show only if current. Skip NewChat if the user cancels or if one is waiting for a response from the ai model. Don't show a chat with a newly received ai model response if the current chat session has somehow changed. The chat session shouldn't be allowed to change if there is a pending response, but keep this as an additional sanity check.
    * SimpleChat: Update readme, title, show usage if no chat to show
    * SimpleChat: Cleanup the log/dialog messages a bit
2024-05-22  build : remove zig (#7471)  (Georgi Gerganov)
2024-05-22  common : normalize naming style (#7462)  (Georgi Gerganov)
    * common : normalize naming style (ggml-ci)
    * common : match declaration / definition order
    * zig : try to fix build
2024-05-22  CUDA: fix FA out-of-bounds writes (#7465)  (Johannes Gäßler)
2024-05-22  phi3 : duplicate rope factors in each layer (#7447)  (slaren)
    * phi3 : duplicate rope factors in each layer
      phi3 : set phi-3 model type as 14B
      model loader : simplify the process for duplicating model tensors
      llama-bench : remove default pg test
    * replace bool parameters in llama_model_loader with named flags
2024-05-22  vulkan: add workaround for iterator boundary check to fix clang-cl debug build (#7426)  (k.h.lai)
2024-05-22  llama : add missing model type names (#7445)  (Justine Tunney)
2024-05-22  cuda : fix compile warning (#7454)  (Georgi Gerganov)
2024-05-22  CUDA: remove incorrect precision check (#7454)  (Johannes Gäßler)
2024-05-22  cuda : fix rope + add tests (#7452)  (Georgi Gerganov)
    * cuda : fix rope pos data (ggml-ci)
    * ggml : drop mode & 1 == 1 support for ggml_rope (ggml-ci)
    * ggml : support freq_factors for f16 rope (CPU) (ggml-ci)
    * tests : add rope tests using frequency factors (ggml-ci)
2024-05-21  llama : add phi3 128K model support (#7225)  (liuwei-git)
    * add phi3 128k support in convert-hf-to-gguf
    * add phi3 128k support in cuda
    * address build warnings on llama.cpp
    * adjust index value in cuda long rope freq factors
    * add long rope support in ggml cpu backend
    * make freq factors only depend on ctx size
    * remove unused rope scaling type 'su' from gguf converter
    * fix lint warnings on convert-hf-to-gguf.py
    * set to the short freq factor when context size is smaller than the trained context size
    * add one line of comments
    * metal : support rope freq_factors
    * ggml : update ggml_rope_ext API to support freq. factors
    * backends : add dev messages to support rope freq. factors
    * minor : style
    * tests : update to use new rope API
    * backends : fix pragma semicolons
    * minor : cleanup
    * llama : move rope factors from KV header to tensors
    * llama : remove tmp assert
    * cuda : fix compile warning
    * convert : read/write n_head_kv
    * llama : fix uninitialized tensors
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-21  metal : handle F16 inf values, fix FA partial offload (#7434)  (Georgi Gerganov)
    ggml-ci
2024-05-21  `grammars`: fix resampling logic regression (#7424)  (Olivier Chafik)
2024-05-21  CUDA: fix unused warning in mmq.cu (#7442)  (Johannes Gäßler)
2024-05-21  tests : test-tokenizer-0.sh print more info (#7402)  (Georgi Gerganov)
2024-05-21  examples: cache hf model when --model not provided (#7353)  (Amir)
    * examples: cache hf model when --model not provided
2024-05-21  CUDA: deduplicate mmq code (#7397)  (Johannes Gäßler)
2024-05-21  Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)  (jaime-m-p)
    * Update brute force test: add_special
    * Update brute force test: default values for add_bos_token and add_eos_token
    * Enable rtrim when pre-inserting BOS
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    * Revert "server : fix test regexes"
2024-05-20  Tokenizer SPM fixes for phi-3 and llama-spm (#7375)  (jaime-m-p)
    * Update brute force test: special tokens
    * Fix added tokens
      - Try to read 'added_tokens.json'.
      - Try to read 'tokenizer_config.json'.
      - Try to read 'tokenizer.json'.
    * Fix special tokens rtrim
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    * server : fix test regexes
2024-05-21  llama : remove Persimmon (#7408)  (Georgi Gerganov)
    * llama : remove Persimmon
    * requirements : remove
2024-05-20  perplexity: update README FP16 results [no ci] (#7413)  (Johannes Gäßler)
2024-05-20  rpc : track allocated buffers (#7411)  (Radoslav Gerganov)
    * rpc : track allocated buffers (ref: #7407)
    * rpc : pack rpc_tensor tightly
2024-05-20  server : fix temperature + disable some tests (#7409)  (Georgi Gerganov)
    * server : fix temperature
    * server : disable tests relying on parallel determinism
    * ci : change server Debug -> RelWithDebInfo
2024-05-20  [SYCL] Update SYCL upscale operation (#7321)  (AidanBeltonS)
    * Update SYCL upscale operation
    * Formatting
    * Remove messages
2024-05-20  Update README.md (#7410)  (Bingan)
2024-05-20  ggml-opencl, llama: using reserve() if count already known (#7272)  (Herman Semenov)
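The change above applies a standard C++ idiom; the sketch below is a generic hedged illustration of that idiom, not the actual ggml-opencl/llama code.

```cpp
// Generic illustration of the reserve() pattern: when the final element count is known
// up front, a single reserve() avoids repeated reallocation and copying during push_back().
#include <cstddef>
#include <vector>

std::vector<float> squares(std::size_t n) {
    std::vector<float> out;
    out.reserve(n);   // one allocation instead of multiple geometric growth steps
    for (std::size_t i = 0; i < n; ++i) {
        out.push_back(static_cast<float>(i) * static_cast<float>(i));
    }
    return out;
}
```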