ik_llama.cpp.git (branch: main), log for path: root / llama.h

Age | Commit message | Author
2024-01-15 | llama : apply classifier-free guidance to logits directly (#4951) | David Friehs
2024-01-14 | 2-bit quantizations (#4897) | Kawrakow
2024-01-13 | llama : minimize size used for state save/load (#4820) | David Friehs
2024-01-12 | llama : ggml-backend integration (#4766) | slaren
2024-01-11 | llama : restore intended k-quants mixes for MoE models (#4872) | Kawrakow
2024-01-11 | ggml : SOTA 2-bit quants (add IQ2_XS) (#4856) | Kawrakow
2024-01-08 | SOTA 2-bit quants (#4773) | Kawrakow
2024-01-08 | main : add self-extend support (#4815) | Georgi Gerganov
2024-01-08 | examples : add passkey test (#3856) | Georgi Gerganov
2024-01-02 | llama : replace all API facing `int`'s with `int32_t` (#4577) | Marcus Dunn
2023-12-22 | llama : add ability to cancel model loading (#4462) | crasm
2023-12-21 | llama : allow getting n_batch from llama_context in c api (#4540) | Marcus Dunn
2023-12-16 | lora : add support for non-llama models (#3333) | slaren
2023-12-12 | llama : document logits_all deprecation (#4418) | crasm
2023-12-07 | llama : per-layer KV cache + quantum K cache (#4309) | Georgi Gerganov
2023-12-05 | llama : allow overriding GGUF metadata when loading model (#4092) | Kerfuffle
2023-11-25 | Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189) | crasm
2023-11-23 | llama : KV cache view API + better KV cache management (#4170) | Georgi Gerganov
2023-11-17 | llama : add functions to get the model's metadata (#4013) | slaren
2023-11-16 | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | Kerfuffle
2023-11-03 | common : YAYF (yet another YARN fix) (#3925) | Georgi Gerganov
2023-11-01 | llama : implement YaRN RoPE scaling (#2268) | cebtenzzre
2023-10-31 | samplers : Min-P sampler implementation [alternative to Top P/Top K] (#3841) | kalomaze
2023-10-29 | Extend llama_kv_cache_seq_rm to allow matching any sequence (#3843) | Kerfuffle
2023-10-29 | ggml : quantization refactoring (#3833) | Georgi Gerganov
2023-10-28 | llama : add option for greedy sampling with probs (#3813) | Georgi Gerganov
2023-10-27 | cuda : improve text-generation and batched decoding performance (#3776) | Georgi Gerganov
2023-10-23 | llama : remove token functions with `context` args in favor of `model` (#3720) | Marcus Dunn
2023-10-20 | sampling : refactor init to use llama_sampling_params (#3696) | Georgi Gerganov
2023-10-18 | speculative : add tree-based sampling example (#3624) | Georgi Gerganov
2023-10-17 | tokenizer : special token handling (#3538) | staviq
2023-10-03 | llama : fix session saving/loading (#3400) | Georgi Gerganov
2023-10-03 | llama : expose model's rope_freq_scale in the API (#3418) | Alex Klinkhamer
2023-10-02 | infill : add new example + extend server API (#3296) | vvhg1
2023-09-29 | llama.cpp : add documentation about rope_freq_base and scale values (#3401) | slaren
2023-09-28 | llama.cpp : split llama_context_params into model and context params (#3301) | slaren
2023-09-28 | train : finetune LORA (#2632) | xaedes
2023-09-28 | llama : custom attention mask + parallel decoding + no context swaps (#3228) | Georgi Gerganov
2023-09-27 | metal : reusing llama.cpp logging (#3152) | Rickard Hallerbäck
2023-09-16 | Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (... | goerch
2023-09-15 | check C++ code with -Wmissing-declarations (#3184) | Cebtenzzre
2023-09-08 | examples : make n_ctx warning work again (#3066) | Cebtenzzre
2023-09-05 | speculative : add grammar support (#2991) | Georgi Gerganov
2023-09-01 | Allow quantize to only copy tensors, some other improvements (#2931) | Kerfuffle
2023-08-29 | added `struct` to llama_dump_timing_info_yaml's `llama_context` (#2857) | Marcus Dunn
2023-08-28 | YAML result logging + preset script (#2657) | Johannes Gäßler
2023-08-28 | llama.h : add missing struct keyword for C compat in callback type (#2847) | igarnier
2023-08-27 | llama : more tokenizer fixes (#2810) | Georgi Gerganov
2023-08-25 | llama : fix struct decl (#2790) | Marcus Dunn
2023-08-25 | llama : add llama_beam_search() (#2267) | Matt Pulver