Repository: ik_llama.cpp.git (branch: main)
Age         Commit message (Author)

2023-08-22  Improve handling of special tokens in GGML to GGUF converter (#2725) (Kerfuffle)
2023-08-23  llama : fix whitespace escaping in tokenizer (#2724) (goerch)
2023-08-22  gguf : add ftype meta info to the model (#2710) (Georgi Gerganov)
2023-08-22  Quantization improvements for k_quants (#2707) (Kawrakow)
2023-08-22  ggml-cuda : use graph allocator (#2684) (slaren)
2023-08-21  gguf : new file format with flexible meta data (beta) (#2398) (Georgi Gerganov)
2023-08-18  llama : add benchmark example (#2626) (slaren)
2023-08-17  Fix unicode in grammars (fixes #2501) (#2553) (Evan Jones)
2023-08-17  llama : replace (permute + reshape + view_1d) with (view_3d) (#2538) (Georgi Gerganov)
2023-08-16  metal : enable ggml-alloc (#2627) (Shouzheng Liu)
2023-08-16  metal : matrix-matrix multiplication kernel (#2615) (Shouzheng Liu)
2023-08-14  metal : return null instead of exit(1) (#2573) (Jhen-Jie Hong)
2023-08-09  add log_callback to llama_context_params for custom logging. (#2234) (grahameth)
2023-08-08  CUDA: tighter VRAM scratch size for 65b/70b (#2551) (Johannes Gäßler)
2023-08-07  Fixed mmap prefetch for GPU offloading (#2529) (Johannes Gäßler)
2023-08-04  Stream save llama context data to file instead of allocating entire buffer up... (l3utterfly)
2023-07-31  CUDA: mmq CLI option, fixed mmq build issues (#2453) (Johannes Gäßler)
2023-07-31  Fix Metal backend broken from the allocator changes (#2455) (slaren)
2023-07-30  ggml : add graph tensor allocator (#2411) (slaren)
2023-07-28  llama : support more diverse tokenizers? (#2420) (eric8607242)
2023-07-28  llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433) (Rand Xie)
2023-07-27  metal : disable graph concurrency optimization due to bug (#2413) (Georgi Gerganov)
2023-07-26  ggml : allocate graphs in a context (#2392) (slaren)
2023-07-25  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) (Kawrakow)
2023-07-25  ggml : improve graph build time via hash table lookup (#2329) (slaren)
2023-07-25  metal : concurrently dispatch commands (#2358) (Shouzheng Liu)
2023-07-24  make rms_norm_eps a parameter (#2374) (slaren)
2023-07-23  llama : add grammar-based sampling (#1773) (Evan Jones)
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276) (Georgi Gerganov)
2023-07-23  llama : print max tensor size to stderr (#2336) (Christian Demsar)
2023-07-22  llama : optimize memory buffers (#2325) (Georgi Gerganov)
2023-07-21  ggml : fix rope args order + assert (#2054) (Georgi Gerganov)
2023-07-21  llama : remove cfg smooth factor as it is only a reparameterization of the gu... (Guillaume "Vermeille" Sanchez)
2023-07-21  llama : make tensor_split ptr instead of array (#2272) (Georgi Gerganov)
2023-07-20  llama : fix regression from #2000 - could not load no-mmap models (Georgi Gerganov)
2023-07-19  llama : extend API to get max devices at runtime (#2253) (Rinne)
2023-07-18  ci : integrate with ggml-org/ci (#2250) (Georgi Gerganov)
2023-07-17  llama : fix t_start_sample_us initialization warning (#2238) (Alex Klinkhamer)
2023-07-15  llama : add custom RoPE (#2054) (Xiao-Yong Jin)
2023-07-14  llama : add functions that work directly on model (#2197) (Bach Le)
2023-07-11  llama : add classifier-free guidance (#2135) (Bach Le)
2023-07-11  Possible solution to allow K-quants on models with n_vocab!=32000 (#2148) (LostRuins)
2023-07-10  mpi : add support for distributed inference via MPI (#2099) (Evan Miller)
2023-07-09  llama : remove "first token must be BOS" restriction (#2153) (oobabooga)
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999) (Qingyou Meng)
2023-07-05  Expose generation timings from server & update completions.js (#2116) (Tobias Lütke)
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237) (Stephan Walter)
2023-07-05  llama: Don't double count the sampling time (#2107) (Howard Su)
2023-07-05  Fixed OpenCL offloading prints (#2082) (Johannes Gäßler)
2023-07-03  Fix crash of test-tokenizer-0 under Debug build (#2064) (Howard Su)