path: root/llama.h
Age | Commit message | Author
2023-09-01 | Allow quantize to only copy tensors, some other improvements (#2931) | Kerfuffle
2023-08-29 | added `struct` to llama_dump_timing_info_yaml's `llama_context` (#2857) | Marcus Dunn
2023-08-28 | YAML result logging + preset script (#2657) | Johannes Gäßler
2023-08-28 | llama.h : add missing struct keyword for C compat in callback type (#2847) | igarnier
2023-08-27 | llama : more tokenizer fixes (#2810) | Georgi Gerganov
2023-08-25 | llama : fix struct decl (#2790) | Marcus Dunn
2023-08-25 | llama : add llama_beam_search() (#2267) | Matt Pulver
2023-08-25 | llama-bench : add model sizes (#2771) | slaren
2023-08-24 | Added `enum` to `llama_token_get_type` return type (#2774) | Marcus Dunn
2023-08-23 | llm : add Falcon support (#2717) | Georgi Gerganov
2023-08-22 | gguf : add ftype meta info to the model (#2710) | Georgi Gerganov
2023-08-21 | gguf : new file format with flexible meta data (beta) (#2398) | Georgi Gerganov
2023-08-18 | llama : add benchmark example (#2626) | slaren
2023-08-14 | llama : add missing enum keyword in function signatures (#2610) | Kamil Tomšík
2023-08-09 | add log_callback to llama_context_params for custom logging. (#2234) | grahameth
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | Johannes Gäßler
2023-07-25 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | Kawrakow
2023-07-24 | make rms_norm_eps a parameter (#2374) | slaren
2023-07-23 | llama : add grammar-based sampling (#1773) | Evan Jones
2023-07-23 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | Georgi Gerganov
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the gu... | Guillaume "Vermeille" Sanchez
2023-07-21 | llama : make tensor_split ptr instead of array (#2272) | Georgi Gerganov
2023-07-19 | llama : extend API to get max devices at runtime (#2253) | Rinne
2023-07-15 | llama : add custom RoPE (#2054) | Xiao-Yong Jin
2023-07-14 | llama : add functions that work directly on model (#2197) | Bach Le
2023-07-11 | llama : add classifier-free guidance (#2135) | Bach Le
2023-07-10 | mpi : add support for distributed inference via MPI (#2099) | Evan Miller
2023-07-05 | Expose generation timings from server & update completions.js (#2116) | Tobias Lütke
2023-06-29 | Use unsigned for random seed (#2006) | Howard Su
2023-06-28 | llama : support input embeddings directly (#1910) | ningshanwutuobang
2023-06-26 | ggml : add NUMA support (#1556) | zrm
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797) | Didzis Gosko
2023-06-20 | llama : fix params struct alignment (#1936) | Ettore Di Giacinto
2023-06-15 | examples : add chat-vicuna.sh (#1854) | yangli2
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler
2023-06-13 | train : improved training-from-scratch example (#1652) | xaedes
2023-06-10 | llama : support requantizing models instead of only allowing quantization fro... | Kerfuffle
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler
2023-06-05 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | Kawrakow
2023-06-04 | llama : Metal inference (#1642) | Georgi Gerganov
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle
2023-05-20 | llama : define magic numbers as integer constants (#1518) (#1520) | Juuso Alasuutari
2023-05-20 | llama : add llama_init_backend() API (close #1527) | Georgi Gerganov
2023-05-20 | llama : fix compile warnings in llama_set_state_data() | Georgi Gerganov
2023-05-19 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | Georgi Gerganov
2023-05-17 | Remove unused n_parts parameter (#1509) | Stephan Walter
2023-05-13 | ggml : GPU-accelerated token generation (#1412) | Johannes Gäßler
2023-05-13 | llama : free ggml context in set / copy state data (close #1425) | Georgi Gerganov
2023-05-12 | ggml : remove bit shuffling (#1405) | Georgi Gerganov
2023-05-06 | Remove default arguments from sampling functions (#1343) | Jed Fox
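
Several of the entries above track the C API of llama.h taking shape: the stateless-model/stateful-context split (#1797), functions that work directly on the model (#2197), backend initialization (#1527), and the explicit `struct`/`enum` keywords needed for C compatibility (#2610, #2774, #2847). The sketch below shows how a lifecycle of that era might fit together. It is an assumption-laden illustration, not taken from this log: only `llama_init_backend()` is named in an entry above; `llama_context_default_params()`, `llama_load_model_from_file()`, `llama_new_context_with_model()`, `llama_free()`, and `llama_free_model()` are names I believe this period's header used, and the `bool` argument assumes the NUMA change (#1556).

/*
 * Hypothetical lifecycle sketch for mid-2023 llama.h, as implied by the
 * commits above. All names other than llama_init_backend() are era-based
 * assumptions; later revisions renamed or reshaped several of them.
 */
#include "llama.h"
#include <stdio.h>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model-path>\n", argv[0]);
        return 1;
    }

    /* named in the 2023-05-20 entry; the bool assumes NUMA support (#1556) */
    llama_init_backend(false);

    /* post-#1797: the model is stateless and can be loaded once... */
    struct llama_context_params params = llama_context_default_params();
    struct llama_model * model = llama_load_model_from_file(argv[1], params);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    /* ...while per-session state (KV cache, RNG seed) lives in the context */
    struct llama_context * ctx = llama_new_context_with_model(model, params);
    if (ctx == NULL) {
        llama_free_model(model);
        return 1;
    }

    /* tokenize / eval / sample would go here */

    llama_free(ctx);
    llama_free_model(model);
    return 0;
}

Note how the `struct` keywords are written out in full: as the 2023-08-14/24/28 entries show, the header had to add missing `struct` and `enum` keywords so that plain C consumers, not just C++, could compile against it.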