path: root/llama.cpp
Date        Commit message  Author
2023-08-29  10X faster BPE tokenizer (#2876)  Kawrakow
2023-08-28  train : mem usage and other improvements (#2439)  xaedes
2023-08-28  YAML result logging + preset script (#2657)  Johannes Gäßler
2023-08-28  llama.cpp : fix wrong vsnprintf call in MS compiler (#2856)  grahameth
2023-08-27  llama : fix MPI threads (close #2827)  Georgi Gerganov
2023-08-27  llama : speedup tokenization (#2831)  Kawrakow
2023-08-27  falcon : fix CUDA inference by making K and Q contiguous (#2830)  Georgi Gerganov
2023-08-27  k_quants tuning for Falcon-7b (#2816)  Kawrakow
2023-08-27  gguf : add 64-bit support (GGUF v2) (#2821)  Georgi Gerganov
2023-08-27  llama : more tokenizer fixes (#2810)  Georgi Gerganov
2023-08-27  ggml : detect SSSE3 (#2825)  Przemysław Pawełczyk
2023-08-26  llama : use Unicode Escape Sequence to replace encoded characters (#2814)  Tim Miller
2023-08-26  llama : move #includes out of _GNU_SOURCE conditional (#2817)  Cebtenzzre
2023-08-26  llama : use std::abs in llama_sample_tail_free (#2800)  Cebtenzzre
2023-08-26  k-quants : remove unnecessary tensor shape restrictions (#2811)  Georgi Gerganov
2023-08-26  Better perplexity for 2- and 3-bit quantization for LLaMA-v2-70B (#2807)  Kawrakow
2023-08-26  Fix spm whitespaces (#2806)  klosax
2023-08-25  llama : add llama_beam_search() (#2267)  Matt Pulver
2023-08-25  llama-bench : add model sizes (#2771)  slaren
2023-08-25  ROCm Port (#1087)  Henri Vasserman
2023-08-25  cuda : add RoPE kernel for mode == 2 (NeoX) (#2760)  Georgi Gerganov
2023-08-24  gguf : add rope_freq_base parameter for CodeLlama (#2769)  slaren
2023-08-24  metal : bug-fix when enabling ggml-alloc (#2757)  Shouzheng Liu
2023-08-24  fix convert.py for codellama, add llama 34B to the list of recognized models ...  slaren
2023-08-24  llama : escape all U+2581 in a string (#2750)  Georgi Gerganov
2023-08-24  llama : fix grammar sometimes generating null char (#2756)  Evan Jones
2023-08-23  llm : add Falcon support (#2717)  Georgi Gerganov
2023-08-22  Improve handling of special tokens in GGML to GGUF converter (#2725)  Kerfuffle
2023-08-23  llama : fix whitespace escaping in tokenizer (#2724)  goerch
2023-08-22  gguf : add ftype meta info to the model (#2710)  Georgi Gerganov
2023-08-22  Quantization improvements for k_quants (#2707)  Kawrakow
2023-08-22  ggml-cuda : use graph allocator (#2684)  slaren
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398)  Georgi Gerganov
2023-08-18  llama : add benchmark example (#2626)  slaren
2023-08-17  Fix unicode in grammars (fixes #2501) (#2553)  Evan Jones
2023-08-17  llama : replace (permute + reshape + view_1d) with (view_3d) (#2538)  Georgi Gerganov
2023-08-16  metal : enable ggml-alloc (#2627)  Shouzheng Liu
2023-08-16  metal : matrix-matrix multiplication kernel (#2615)  Shouzheng Liu
2023-08-14  metal : return null instead of exit(1) (#2573)  Jhen-Jie Hong
2023-08-09  add log_callback to llama_context_params for custom logging. (#2234)  grahameth
2023-08-08  CUDA: tighter VRAM scratch size for 65b/70b (#2551)  Johannes Gäßler
2023-08-07  Fixed mmap prefetch for GPU offloading (#2529)  Johannes Gäßler
2023-08-04  Stream save llama context data to file instead of allocating entire buffer up...  l3utterfly
2023-07-31  CUDA: mmq CLI option, fixed mmq build issues (#2453)  Johannes Gäßler
2023-07-31  Fix Metal backend broken from the allocator changes (#2455)  slaren
2023-07-30  ggml : add graph tensor allocator (#2411)  slaren
2023-07-28  llama : support more diverse tokenizers? (#2420)  eric8607242
2023-07-28  llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)  Rand Xie
2023-07-27  metal : disable graph concurrency optimization due to bug (#2413)  Georgi Gerganov
2023-07-26  ggml : allocate graphs in a context (#2392)  slaren