path: root/llama.cpp
Age        | Commit message                                                                  | Author
2023-11-10 | Unbreak persimmon after #3837 (#4010)                                           | Galunid
2023-11-07 | cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946)             | Meng Zhang
2023-11-05 | llama : mark LLM_ARCH_STARCODER as full offload supported (#3945)               | Meng Zhang
2023-11-03 | llama : change yarn_ext_factor placeholder to -1 (#3922)                        | cebtenzzre
2023-11-02 | llm : prevent from 1-D tensors being GPU split (#3697)                          | Georgi Gerganov
2023-11-01 | llama : fix llama_context_default_params after #2268 (#3893)                    | cebtenzzre
2023-11-01 | llama : implement YaRN RoPE scaling (#2268)                                     | cebtenzzre
2023-11-01 | llm : fix llm_build_kqv taking unused tensor (benign, #3837)                    | Georgi Gerganov
2023-11-01 | llm : fix falcon norm after refactoring (#3837)                                 | Georgi Gerganov
2023-11-01 | llm : add llm_build_context (#3881)                                             | Georgi Gerganov
2023-11-01 | finetune : add -ngl parameter (#3762)                                           | Andrew Godfrey
2023-11-01 | llama : refactor graph build code (#3837)                                       | Georgi Gerganov
2023-10-31 | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841)    | kalomaze
2023-10-30 | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861)                           | Georgi Gerganov
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843)             | Kerfuffle
2023-10-29 | llama : fix kv shift bug (#3835)                                                | Georgi Gerganov
2023-10-29 | ggml : quantization refactoring (#3833)                                         | Georgi Gerganov
2023-10-28 | llama : allow quantizing k-quants to fall back when tensor size incompatible ...| Kerfuffle
2023-10-28 | starcoder : add GPU offloading (#3827)                                          | Georgi Gerganov
2023-10-27 | llama : correctly report GGUFv3 format (#3818)                                  | cebtenzzre
2023-10-27 | cuda : improve text-generation and batched decoding performance (#3776)         | Georgi Gerganov
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720)  | Marcus Dunn
2023-10-22 | Add test for MPT tokenization (#3728)                                           | goerch
2023-10-22 | llama : validate special token ids are in range when loading GGUF model (#3635) | Kerfuffle
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696)                   | Georgi Gerganov
2023-10-20 | ggml : fix rope + llama minor optimizations (#3560)                             | Herman Semenov
2023-10-18 | speculative : add tree-based sampling example (#3624)                           | Georgi Gerganov
2023-10-17 | fix embeddings when using CUDA (#3657)                                          | slaren
2023-10-17 | llama : avoid fprintf in favor of LLAMA_LOG (#3538)                             | Georgi Gerganov
2023-10-17 | tokenizer : special token handling (#3538)                                      | staviq
2023-10-15 | MPT : support GQA for replit-code-v1.5 (#3627)                                  | cebtenzzre
2023-10-13 | llama : remove n_threads from llama_decode_internal (#3614)                     | Daniel Bevenius
2023-10-10 | Minor improvements in GPT2 tokenizer (#3567)                                    | goerch
2023-10-10 | llm : add bloom models (#3553)                                                  | Xingchen Song (宋星辰)
2023-10-10 | llm : add MPT support (#3417)                                                   | Jan Ploski
2023-10-09 | refact : fix convert script + zero out KV cache to avoid nans (#3523)           | Georgi Gerganov
2023-10-08 | sync : ggml (ggml-backend) (#3548)                                              | Georgi Gerganov
2023-10-08 | llama : fix missing break in Persimmon arch case statements (#3535)             | Kerfuffle
2023-10-07 | quantize : fail fast on write errors (#3521)                                    | cebtenzzre
2023-10-07 | llm : support Adept Persimmon 8B (#3410)                                        | Phillip Kravtsov
2023-10-07 | Fix for #3454 (#3455)                                                           | goerch
2023-10-06 | kv cache slot search improvements (#3493)                                       | Kerfuffle
2023-10-06 | parallel : add option to load external prompt file (#3416)                      | pudepiedj
2023-10-06 | llama : correct hparams comparison (#3446)                                      | l3utterfly
2023-10-04 | llm : add Refact model (#3329)                                                  | ds5t5
2023-10-03 | llama : fix session saving/loading (#3400)                                      | Georgi Gerganov
2023-10-03 | llama : expose model's rope_freq_scale in the API (#3418)                       | Alex Klinkhamer
2023-10-03 | Work on the BPE tokenizer (#3252)                                               | goerch
2023-10-02 | metal : set log callback before initializing (#3427)                            | Adrian
2023-10-02 | infill : add new example + extend server API (#3296)                            | vvhg1