ik_llama.cpp.git (branch: main)
Commit log for path: gguf-py
2024-03-15  llama : add Command-R support (#6033)  (Andrew Canis)
2024-03-15  gguf : add support for I64 and F64 arrays (#6062)  (Ondřej Čertík)
2024-03-14  gguf-py : bump version to 0.8.0 (#6060)  (Ondřej Čertík)
2024-03-14  llama : support models without vocabulary (#5798)  (Michael Podvitskiy)
2024-03-14  gguf-py : fix dtype check (#6045)  (Georgi Gerganov)
2024-03-14  gguf-py : add support for I8, I16 and I32 (#6045)  (Ondřej Čertík)
2024-03-08  llama : support Mamba Selective State Space Models (#5328)  (compilade)
2024-03-03  gguf-dump : support i-quants (#5841)  (Nindaleth)
2024-03-02  convert-hf : make model class definitions self-contained (#5825)  (Jared Van Bortel)
2024-03-01  llama : add StarCoder2 support (#5795)  (Sourab Mangrulkar)
2024-02-21  llama : add `gemma` model (#5631)  (postmasters)
2024-02-15  Use correct type of pooling for embedding models (#5500)  (Douglas Hanley)
2024-02-15  fix(gguf-py): special tokens are no longer skipped when add_<token>_token is ...  (Michaël de Vries)
2024-02-13  gguf : add python reader example (#5216)  (John)
2024-02-13  llama : add support for Nomic Embed (#5468)  (Jared Van Bortel)
2024-02-13  llama : support batched embeddings (#5466)  (Douglas Hanley)
2024-02-11  Add support for BERT embedding models (#5423)  (Douglas Hanley)
2024-02-07  llama : add MiniCPM support (#5346)  (runfuture)
2024-02-01  llama : support InternLM2 (#5184)  (Guoteng)
2024-01-28  llama : add support for Orion-14B (#5118)  (sharpHL)
2024-01-26  gguf : fix "general.alignment" type in gguf_reader.py (#5136)  (Riceball LEE)
2024-01-19  llama : support upcoming Qwen2 (#5037)  (Shijie)
2024-01-19  llama : add CodeShell support (#5016)  (chiranko)
2024-01-13  convert : update phi-2 to latest HF repo (#4903)  (Georgi Gerganov)
2024-01-12  llama : fix llm_build_k_shift to use correct n_rot (#4889)  (Georgi Gerganov)
2024-01-02  llama : differentiate the KV dims in the attention (#4657)  (postmasters)
2023-12-28  gpt2 : Add gpt2 architecture integration (#4555)  (manikbhandari)
2023-12-27  llama : add AWQ for llama, llama2, mpt, and mistral models (#4593)  (Nam D. Tran)
2023-12-24  llama : add PLaMo model (#3557)  (Shintarou Okada)
2023-12-21  gguf-py : fix broken link  (Georgi Gerganov)
2023-12-21  py : open merges file as 'utf-8' (#4566)  (howlger)
2023-12-18  llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490)  (Ebey Abraham)
2023-12-17  gguf-py : fail fast on nonsensical special token IDs (#4489)  (Jared Van Bortel)
2023-12-13  llama : add Mixtral support (#4406)  (slaren)
2023-12-12  english : use `typos` to fix comments and logs (#4354)  (Richard Kiss)
2023-12-01  llama : add Qwen support (#4281)  (Shijie)
2023-11-20  ci : add flake8 to github actions (python linting) (#4129)  (Galunid)
2023-11-19  gguf-py : export chat templates (#4125)  (slaren)
2023-11-16  Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)  (Kerfuffle)
2023-11-14  stablelm : StableLM support (#3586)  (Galunid)
2023-11-12  gguf-py: gguf_writer: Use bytearray to build metadata (#4051)  (Kerfuffle)
2023-11-11  Fix gguf-convert-endian script (#4037)  (M. Yusuf Sarıgöz)
2023-11-11  gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)  (Kerfuffle)
2023-11-07  gguf : track writer state, free unneeded tensors, cleanup (#3871)  (Jared Van Bortel)
2023-11-04  gguf-py: Support 01.AI Yi models (#3943)  (Kerfuffle)
2023-11-01  llama : implement YaRN RoPE scaling (#2268)  (cebtenzzre)
2023-10-22  llama : validate special token ids are in range when loading GGUF model (#3635)  (Kerfuffle)
2023-10-20  gguf : support big endian platform (#3552)  (Qin Yue Chen)
2023-10-10  llm : add bloom models (#3553)  (Xingchen Song(宋星辰))
2023-10-07  gguf.py : fix CI for publishing GGUF package (#3532)  (M. Yusuf Sarıgöz)