path: root/llama.cpp
Age         Commit message                                                        Author
2023-08-22  Improve handling of special tokens in GGML to GGUF converter (#2725)  Kerfuffle
2023-08-23  llama : fix whitespace escaping in tokenizer (#2724)  goerch
2023-08-22  gguf : add ftype meta info to the model (#2710)  Georgi Gerganov
2023-08-22  Quantization improvements for k_quants (#2707)  Kawrakow
2023-08-22  ggml-cuda : use graph allocator (#2684)  slaren
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398)  Georgi Gerganov
2023-08-18  llama : add benchmark example (#2626)  slaren
2023-08-17  Fix unicode in grammars (fixes #2501) (#2553)  Evan Jones
2023-08-17  llama : replace (permute + reshape + view_1d) with (view_3d) (#2538)  Georgi Gerganov
2023-08-16  metal : enable ggml-alloc (#2627)  Shouzheng Liu
2023-08-16  metal : matrix-matrix multiplication kernel (#2615)  Shouzheng Liu
2023-08-14  metal : return null instead of exit(1) (#2573)  Jhen-Jie Hong
2023-08-09  add log_callback to llama_context_params for custom logging (#2234)  grahameth
2023-08-08  CUDA: tighter VRAM scratch size for 65b/70b (#2551)  Johannes Gäßler
2023-08-07  Fixed mmap prefetch for GPU offloading (#2529)  Johannes Gäßler
2023-08-04  Stream save llama context data to file instead of allocating entire buffer up...  l3utterfly
2023-07-31  CUDA: mmq CLI option, fixed mmq build issues (#2453)  Johannes Gäßler
2023-07-31  Fix Metal backend broken from the allocator changes (#2455)  slaren
2023-07-30  ggml : add graph tensor allocator (#2411)  slaren
2023-07-28  llama : support more diverse tokenizers? (#2420)  eric8607242
2023-07-28  llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)  Rand Xie
2023-07-27  metal : disable graph concurrency optimization due to bug (#2413)  Georgi Gerganov
2023-07-26  ggml : allocate graphs in a context (#2392)  slaren
2023-07-25  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)  Kawrakow
2023-07-25  ggml : improve graph build time via hash table lookup (#2329)  slaren
2023-07-25  metal : concurrently dispatch commands (#2358)  Shouzheng Liu
2023-07-24  make rms_norm_eps a parameter (#2374)  slaren
2023-07-23  llama : add grammar-based sampling (#1773)  Evan Jones
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276)  Georgi Gerganov
2023-07-23  llama : print max tensor size to stderr (#2336)  Christian Demsar
2023-07-22  llama : optimize memory buffers (#2325)  Georgi Gerganov
2023-07-21  ggml : fix rope args order + assert (#2054)  Georgi Gerganov
2023-07-21  llama : remove cfg smooth factor as it is only a reparameterization of the gu...  Guillaume "Vermeille" Sanchez
2023-07-21  llama : make tensor_split ptr instead of array (#2272)  Georgi Gerganov
2023-07-20  llama : fix regression from #2000 - could not load no-mmap models  Georgi Gerganov
2023-07-19  llama : extend API to get max devices at runtime (#2253)  Rinne
2023-07-18  ci : integrate with ggml-org/ci (#2250)  Georgi Gerganov
2023-07-17  llama : fix t_start_sample_us initialization warning (#2238)  Alex Klinkhamer
2023-07-15  llama : add custom RoPE (#2054)  Xiao-Yong Jin
2023-07-14  llama : add functions that work directly on model (#2197)  Bach Le
2023-07-11  llama : add classifier-free guidance (#2135)  Bach Le
2023-07-11  Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)  LostRuins
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  Evan Miller
2023-07-09  llama : remove "first token must be BOS" restriction (#2153)  oobabooga
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999)  Qingyou Meng
2023-07-05  Expose generation timings from server & update completions.js (#2116)  Tobias Lütke
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)  Stephan Walter
2023-07-05  llama: Don't double count the sampling time (#2107)  Howard Su
2023-07-05  Fixed OpenCL offloading prints (#2082)  Johannes Gäßler
2023-07-03  Fix crash of test-tokenizer-0 under Debug build (#2064)  Howard Su