ik_llama.cpp.git (branch: main)
Commit log for path: root/convert.py
Age         Commit message  [Author]
2024-05-16  convert : get general.name from model dir, not its parent (#5615)  [Jared Van Bortel]
2024-05-13  convert.py: Outfile default name change and additional metadata support (#4858)  [Brian]
2024-05-08  convert-hf : save memory with lazy evaluation (#7075)  [compilade]
2024-05-08  convert.py : --vocab-only generates false but valid params (#7027)  [20kdc]
2024-05-03  convert.py : add python logging instead of print() (#6511)  [Brian]
2024-04-21  llama : support Llama 3 HF conversion (#6745)  [Pedro Cuenca]
2024-04-10  convert.py : add consolidated.safetensors for mixtral 8x22b (#6587)  [slaren]
2024-04-09  BERT tokenizer fixes (#6498)  [Jared Van Bortel]
2024-04-08  Comment explaining a decision (#6531)  [kunnis]
2024-04-03  ggml : mul_mat_id use the same tensor for all the experts (#6387)  [slaren]
2024-03-28  convert : refactor vocab selection logic (#6355)  [Jared Van Bortel]
2024-03-18  convert : use f32 outtype for bf16 tensors (#6106)  [Romain D]
2024-03-14  llama : support models without vocabulary (#5798)  [Michael Podvitskiy]
2024-03-06  convert : remove AWQ remnants (#5768)  [Georgi Gerganov]
2024-03-02  convert : automatically fall back to HfVocab if tokenizer.model doesn't exist...  [Jared Van Bortel]
2024-02-14  llava : support v1.6 (#5267)  [John]
2024-02-06  convert : fix TypeError on GPT-2 vocab.json (#5288)  [Sang-Kil Park]
2024-02-06  py : handle byte tokens in `get_token_type` (#5341)  [Georgi Gerganov]
2024-01-29  py : fix except (#5194)  [Georgi Gerganov]
2024-01-29  py : improve BPE tokenizer support (#5189)  [Sang-Kil Park]
2024-01-20  convert : partially revert PR #4818 (#5041)  [Jared Van Bortel]
2024-01-18  convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#5019)  [David Sommers]
2024-01-17  py : fix whitespace  [Georgi Gerganov]
2024-01-17  py : fix missing added_tokens_dict for SPM and BPE vocabs (#4971)  [Georgi Gerganov]
2024-01-09  convert.py : fix vanilla LLaMA model conversion (#4818)  [Austin]
2023-12-27  llama : add AWQ for llama, llama2, mpt, and mistral models (#4593)  [Nam D. Tran]
2023-12-27  Add byte token type when tokenizer.model is not exists (#4641)  [wonjun Jang]
2023-12-14  convert : support loading vocab from fast tokenizer config (#3633)  [wonjun Jang]
2023-12-13  llama : add Mixtral support (#4406)  [slaren]
2023-12-12  english : use `typos` to fix comments and logs (#4354)  [Richard Kiss]
2023-11-30  convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#4258)  [slaren]
2023-11-25  Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189)  [crasm]
2023-11-20  ci : add flake8 to github actions (python linting) (#4129)  [Galunid]
2023-11-17  convert : use 'model' value if it exists. This allows karpathy/tinyllamas to ...  [Don Mahurin]
2023-11-13  convert.py: also look for plain model.safetensors (#4043)  [afrideva]
2023-11-11  gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)  [Kerfuffle]
2023-11-09  scripts: Generalize convert scripts (#3838)  [Galunid]
2023-11-01  llama : implement YaRN RoPE scaling (#2268)  [cebtenzzre]
2023-10-28  convert : ignore tokens if their IDs are within [0, vocab_size) (#3831)  [Georgi Gerganov]
2023-10-22  llama : validate special token ids are in range when loading GGUF model (#3635)  [Kerfuffle]
2023-10-20  gguf : support big endian platform (#3552)  [Qin Yue Chen]
2023-10-03  Work on the BPE tokenizer (#3252)  [goerch]
2023-10-02  gguf : general usability improvements (#3409)  [cebtenzzre]
2023-09-27  convert : remove bug in convert.py permute function (#3364)  [Zhang Peiyuan]
2023-09-10  convert: remove most of the n_mult usage in convert.py (#3098)  [Erik Scholz]
2023-09-07  convert : fix F32 ftype not being saved (#3048)  [Cebtenzzre]
2023-09-05  convert: fix convert.py not working with int filename_stem (#3028)  [Erik Scholz]
2023-09-03  convert.py : BPE fixes (#2938)  [Kerfuffle]
2023-08-31  convert : fix another python 3.8 issue (#2949)  [Cebtenzzre]
2023-08-31  scripts: Use local gguf package when running from repo (#2927)  [Kerfuffle]