Age | Commit message | Author
2024-01-08 | common : fix the short form of `--grp-attn-w`, not `-gat` (#4825) | howlger
See https://github.com/ggerganov/llama.cpp/blob/master/common/common.cpp#L230C53-L230C57
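For reference, a minimal sketch of the corrected flag handling. The field names mirror common.cpp's gpt_params, but the struct and parsing function here are simplified illustrations, not the verbatim code:

```cpp
#include <cstdlib>
#include <string>

struct gpt_params_sketch {
    int grp_attn_n = 1;   // --grp-attn-n / -gan
    int grp_attn_w = 512; // --grp-attn-w / -gaw
};

// Simplified sketch, not the verbatim common.cpp loop: the point of the fix
// is that the short form of --grp-attn-w is "-gaw", while the code checked
// the non-existent "-gat".
void parse_grp_attn(const std::string & arg, const char * value,
                    gpt_params_sketch & params) {
    if (arg == "-gan" || arg == "--grp-attn-n") {
        params.grp_attn_n = std::atoi(value);
    } else if (arg == "-gaw" || arg == "--grp-attn-w") { // was mistakenly "-gat"
        params.grp_attn_w = std::atoi(value);
    }
}
```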
2024-01-08 | readme : add link to SOTA models | Georgi Gerganov
2024-01-08 | SOTA 2-bit quants (#4773) | Kawrakow
* iq2_xxs: basics
* iq2_xxs: scalar and AVX2 dot products
  Needed to change Q8_K to have quants in the -127...127 range, else the IQ2_XXS AVX implementation becomes very awkward. The alternative would have been to use Q8_0 instead. Perhaps I'll change later, for now this is what we have.
* iq2_xxs: ARM_NEON dot product
  Somehow strangely slow (112 ms/token).
* iq2_xxs: WIP Metal
  Dequantize works, something is still wrong with the dot product.
* iq2_xxs: Metal dot product now works
  We have PP-512 = 475 t/s, TG-128 = 47.3 t/s. Not the greatest performance, but not complete garbage either.
* iq2_xxs: slightly faster dot product
  TG-128 is now 48.4 t/s
* iq2_xxs: slightly faster dot product
  TG-128 is now 50.9 t/s
* iq2_xxs: even faster Metal dot product
  TG-128 is now 54.1 t/s. Strangely enough, putting the signs lookup table into shared memory has a bigger impact than the grid values being in shared memory.
* iq2_xxs: dequantize CUDA kernel - fix conflict with master
* iq2_xxs: quantized CUDA dot product (MMVQ)
  We get TG-128 = 153.1 t/s
* iq2_xxs: slightly faster CUDA dot product
  TG-128 is now at 155.1 t/s.
* iq2_xxs: add to llama ftype enum
* iq2_xxs: fix MoE on Metal
* Fix missing MMQ ops when on hipBLAS
  I had put the ggml_supports_mmq call at the wrong place.
* Fix bug in quantize_row_iq2_xxs
  The 0.25f factor was missing. Great detective work by @ggerganov!
* Fixing tests
* PR suggestion

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
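The note about restricting Q8_K to the -127...127 range is easiest to see in a toy version of symmetric 8-bit quantization. This is an illustrative sketch, not the actual ggml Q8_K kernel:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Toy illustration (not the actual ggml kernel) of symmetric 8-bit
// quantization clamped to [-127, 127]: avoiding -128 keeps the integer
// range symmetric, which simplifies SIMD dot products against the
// sign/grid representation used by IQ2_XXS.
std::vector<int8_t> quantize_symmetric_q8(const float * x, int n, float & scale) {
    float amax = 0.0f;
    for (int i = 0; i < n; ++i) {
        amax = std::max(amax, std::fabs(x[i]));
    }
    scale = amax / 127.0f;                        // |max| maps to 127
    const float id = scale > 0.0f ? 1.0f / scale : 0.0f;
    std::vector<int8_t> q(n);
    for (int i = 0; i < n; ++i) {
        const int v = (int) std::lround(x[i] * id);
        q[i] = (int8_t) std::clamp(v, -127, 127); // never -128
    }
    return q;
}
```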
2024-01-08 | swift : exclude ggml-metal.metal from the package (#4822) | Georgi Gerganov
2024-01-08 | llama.swiftui : update readme | Georgi Gerganov
2024-01-08 | main : add self-extend support (#4815) | Georgi Gerganov
* examples : add passkey test
* passkey : better prints
* passkey : select pass key pos from CLI
* passkey : simplify n_past logic
* llama : "self-extend"-like context extension
* passkey : add comment
* main : add Self-Extend support
* llama : add comment about llama_kv_cache_seq_div
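A condensed sketch of the grouped-attention update that self-extend performs on the KV cache, reconstructed after the shape of the main.cpp loop of this period; treat the exact index arithmetic as illustrative. Here ga_n and ga_w correspond to the --grp-attn-n and --grp-attn-w parameters:

```cpp
#include "llama.h"

// Illustrative, not verbatim: when n_past outgrows the current group window,
// positions inside the window are divided by ga_n and the tail is shifted
// back, extending the effective context by roughly a factor of ga_n.
void self_extend_update(llama_context * ctx, int & n_past, int & ga_i,
                        const int ga_n, const int ga_w) {
    while (n_past >= ga_i + ga_w) {
        const int ib = (ga_n * ga_i) / ga_w;
        const int bd = (ga_w / ga_n) * (ga_n - 1);
        const int dd = (ga_w / ga_n) - ib * bd - ga_w;

        llama_kv_cache_seq_shift(ctx, 0, ga_i,                  n_past,                ib * bd);
        llama_kv_cache_seq_div  (ctx, 0, ga_i + ib * bd,        ga_i + ib * bd + ga_w, ga_n);
        llama_kv_cache_seq_shift(ctx, 0, ga_i + ib * bd + ga_w, n_past + ib * bd,      dd);

        n_past -= bd;
        ga_i   += ga_w / ga_n; // advance the group boundary
    }
}
```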
2024-01-08 | examples : add passkey test (#3856) | Georgi Gerganov
* examples : add passkey test
* passkey : better prints
* passkey : select pass key pos from CLI
* passkey : simplify n_past logic
* make : add passkey target
* passkey : add "self-extend"-like context extension (#4810)
* llama : "self-extend"-like context extension
* passkey : add comment
* passkey : add readme
2024-01-07 | readme : add lgrammel/modelfusion JS/TS client for llama.cpp (#4814) | Lars Grammel
2024-01-07 | llama-bench : add no-kv-offload parameter (#4812) | slaren
2024-01-07 | CUDA: fixed redundant value dequantization (#4809) | Johannes Gäßler
2024-01-07 | llama : remove unused vars (#4796) | Georgi Gerganov
2024-01-07 | llama : remove redundant GQA check (#4796) | Georgi Gerganov
2024-01-07 | llama.swiftui : use llama.cpp as SPM package (#4804) | Alex Azarov
2024-01-07 | llama : print tensor meta for debugging | Georgi Gerganov
2024-01-07 | llama.swiftui : add visionOS target (#4805) | Alex Azarov
2024-01-07 | ggml : use __builtin_amdgcn_sudot4 in __dp4a for gfx11 (#4787) | Konstantin Zhuravlyov
2024-01-07 | server : fix n_predict check (#4798) | Georgi Gerganov
2024-01-06 | llama.swiftui : use correct pointer for llama_token_eos (#4797) | Daniel Illescas Romero
2024-01-06 | examples : improve base-translate.sh script (#4783) | Georgi Gerganov
2024-01-05 | cmake : check for openblas64 (#4134) | a-n-n-a-l-e-e
The openblas v0.3.22 64-bit pkg-config file is named openblas64.pc; see https://github.com/OpenMathLib/OpenBLAS/issues/3790
2024-01-05 | flake.nix : fix typo (#4700) | Ikko Eltociear Ashimine
betwen -> between
2024-01-05 | metal : switch back to default.metallib (ggml/681) | Georgi Gerganov
ggml-ci
2024-01-05 | ggml : fix q2_k bpw in comments (ggml/680) | Georgi Gerganov
2024-01-05 | ggml : add error handling to graph_compute (whisper/1714) | Finn Voorhees
2024-01-05 | ggml : do not sched_yield when calling BLAS (#4761) | Georgi Gerganov
* ggml : do not sched_yield when calling BLAS
  ggml-ci
* ggml : fix do_yield logic
  ggml-ci
* ggml : simplify do_yield logic
  ggml-ci
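The do_yield question boils down to when a spin-waiting worker should call sched_yield. A generic illustration of making the yield conditional (this is not the actual ggml scheduling code):

```cpp
#include <atomic>
#include <thread>

// Generic illustration, not the ggml thread pool: yielding inside a spin-wait
// helps when the CPU is oversubscribed, but when the waited-on work is an
// external BLAS call that already saturates the cores, yielding only adds
// wake-up latency, so it is made conditional.
void wait_for_work(const std::atomic<bool> & ready, const bool do_yield) {
    while (!ready.load(std::memory_order_acquire)) {
        if (do_yield) {
            std::this_thread::yield(); // give another runnable thread a turn
        }
        // otherwise: busy-wait and resume with minimal latency
    }
}
```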
2024-01-05 | examples : add few-shot translation example (#4783) | Georgi Gerganov
2024-01-04 | finetune : remove unused includes (#4756) | Daniel Bevenius
This commit removes unused includes from finetune.cpp.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-01-04 | server : send token probs for "stream == false" (#4714) | Georgi Gerganov
2024-01-04 | Print backend name on test-backend-ops failure (#4751) | Johannes Gäßler
2024-01-04 | llama.swiftui : support loading custom model from file picker (#4767) | singularity
* swiftui: support load model from file picker
* swiftui: remove trailing whitespace
2024-01-04 | server : fix options in README.md (#4765) | Michael Coppola
* fix examples/server/README.md
* minor : fix whitespace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-01-04 | ggml : include stdlib.h before intrin.h (#4736) | Georgi Gerganov
2024-01-04 | llama.swiftui : fix build of ggml.metallib (#4754) | singularity
* metal: fix metal backend init failure in swiftui
* metal: build ggml.metallib instead of copy src
* llama.swift : remove debug flags from metallib build

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-01-03 | train : fix typo in overlapping-samples help msg (#4758) | Daniel Bevenius
This commit fixes a typo in the help message for the --overlapping-samples option.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-01-03 | swift : update Package.swift to use ggml as dependency (#4691) | Ashraful Islam
* updates the package.swift to use ggml as dependency
* changes the ggml package url src to ggerganov
2024-01-03 | cuda : simplify expression | Georgi Gerganov
Co-authored-by: slaren <slarengh@gmail.com>
2024-01-03 | cuda : mark I16 and I32 ops as unsupported | Georgi Gerganov
ggml-ci
2024-01-03 | sync : ggml | Georgi Gerganov
ggml-ci
2024-01-03 | metal : add kernel_get_rows_i32 | Georgi Gerganov
ggml-ci
2024-01-03 | scripts : fix sync order + metal sed | Georgi Gerganov
2024-01-03 | ggml : extend ggml_get_rows, ggml_repeat, ggml_concat (ggml/639) | Guillaume Wenzek
* add more int ops
* ggml_compute_forward_dup_bytes
* add tests
* PR comments
* tests : minor indentations

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
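A hedged usage sketch of the extended ggml_get_rows; the signature follows the public ggml.h of this period, and the integer-type support is what this commit adds:

```cpp
#include "ggml.h"

// Gather selected rows from a 2-D tensor by index. After this change the
// source tensor may be an integer type (e.g. GGML_TYPE_I32), not only float.
// `data` is [n_embd, n_rows]; `idx` is a 1-D GGML_TYPE_I32 tensor of row ids.
struct ggml_tensor * gather_rows(struct ggml_context * ctx,
                                 struct ggml_tensor * data,
                                 struct ggml_tensor * idx) {
    return ggml_get_rows(ctx, data, idx); // one result row per entry of idx
}
```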
2024-01-03 | server : throw an error when `slot unavailable` (#4741) | Justin Parker
2024-01-02 | metal : optimize ggml_mul_mat_id (faster Mixtral PP) (#4725) | Georgi Gerganov
* ggml : disable fast-math for Metal (cmake build only)
  ggml-ci
* metal : fix Metal API debug warnings
* cmake : add -fno-inline for Metal build (#4545)
* metal : fix API debug warnings
* metal : fix compile warnings
* metal : use uint64_t for strides
* cmake : rename option to LLAMA_METAL_SHADER_DEBUG
* metal : fix mat-vec Q8_0 kernel for BS > 1
* metal : normalize mat-vec kernel signatures
* cmake : respect LLAMA_QKK_64 option
* metal : fix mat-vec Q4_K kernel for QK_K == 64
* metal : optimizing ggml_mul_mat_id (wip)
* metal : minor fix
* metal : opt mul_mm_id
2024-01-02 | server : add token counts to html footer (#4738) | Phil H
* server: add token counts to stats
* server: generate hpp

Co-authored-by: phiharri <ph@got-root.co.uk>
2024-01-02 | llama : llama_model_desc print number of experts | Georgi Gerganov
2024-01-02 | llama : replace all API facing `int`'s with `int32_t` (#4577) | Marcus Dunn
* replaced all API facing `int`'s with `int32_t`
* formatting and missed `int` in `llama_token_to_piece`
2024-01-02 | llama : differentiate the KV dims in the attention (#4657) | postmasters
* Add n_key_dim and n_value_dim
  Some models use values that are not derived from `n_embd`. Also remove `n_embd_head` and `n_embd_gqa` because it is not clear which "head" is referred to (key or value). Fix issue #4648.
* Fix `llm_build_kqv` to use `n_value_gqa`
* Rebase
* Rename variables
* Fix llm_build_kqv to be more generic wrt n_embd_head_k
* Update default values for n_embd_head_k and n_embd_head_v
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Fix llm_load_tensors: the asserts were not backcompat

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
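The gist of the change as a hedged sketch; the field names follow the PR's final n_embd_head_k/n_embd_head_v naming, but the struct itself is illustrative rather than the actual llama.cpp hparams:

```cpp
#include <cstdint>

// Illustrative only: K and V head dimensions become independent
// hyperparameters instead of both being derived as n_embd / n_head.
struct hparams_sketch {
    uint32_t n_head;
    uint32_t n_head_kv;      // number of KV heads (GQA)
    uint32_t n_embd_head_k;  // per-head K dim; not always n_embd / n_head
    uint32_t n_embd_head_v;  // per-head V dim; may differ from the K dim

    // row sizes of the K and V caches, replacing the single ambiguous n_embd_gqa
    uint32_t n_embd_k_gqa() const { return n_embd_head_k * n_head_kv; }
    uint32_t n_embd_v_gqa() const { return n_embd_head_v * n_head_kv; }
};
```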
2024-01-02 | editorconfig : fix whitespace and indentation #4710 | Georgi Gerganov
2024-01-02 | server : add --override-kv parameter (#4710) | minarchist
* Changes to server to allow metadata override
* documentation
* flake.nix: expose full scope in legacyPackages
* flake.nix: rocm not yet supported on aarch64, so hide the output
* flake.nix: expose checks
* workflows: nix-ci: init; build flake outputs
* workflows: nix-ci: add a job for eval
* workflows: weekly `nix flake update`
* workflows: nix-flakestry: drop tag filters ...and add a job for flakehub.com
* workflows: nix-ci: add a qemu job for jetsons
* flake.nix: suggest the binary caches
* flake.lock: update to a commit recently cached by nixpkgs-cuda-ci

Co-authored-by: John <john@jLap.lan>
Co-authored-by: Someone Serge <sergei.kozlukov@aalto.fi>
2024-01-02 | py : re-enable mmap in convert hf (#4732) | Nam D. Tran
* update: awq support llama-7b model
* update: change order
* update: benchmark results for llama2-7b
* update: mistral 7b v1 benchmark
* update: support 4 models
* fix: Readme
* update: ready for PR
* update: readme
* fix: readme
* update: change order import
* black
* format code
* update: work for both mpt and awqmpt
* update: readme
* Rename to llm_build_ffn_mpt_awq
* Formatted other files
* Fixed params count
* fix: remove code
* update: more detail for mpt
* fix: readme
* fix: readme
* update: change folder architecture
* fix: common.cpp
* fix: readme
* fix: remove ggml_repeat
* update: cicd
* update: cicd
* update: remove use_awq arg
* update: readme
* llama : adapt plamo to new ffn
  ggml-ci
* fix: update torch version

Co-authored-by: Trần Đức Nam <v.namtd12@vinai.io>
Co-authored-by: Le Hoang Anh <v.anhlh33@vinai.io>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>