ik_llama.cpp.git (branch: main)
Commit log for convert-hf-to-gguf.py
Age        | Commit message                                                     | Author
2024-03-08 | llama : support Mamba Selective State Space Models (#5328)         | compilade
2024-03-04 | flake : fix                                                        | Georgi Gerganov
2024-03-03 | llama : allow for user specified embedding pooling type (#5849)    | Douglas Hanley
2024-03-02 | convert-hf : make model class definitions self-contained (#5825)   | Jared Van Bortel
2024-03-01 | llama : add StarCoder2 support (#5795)                             | Sourab Mangrulkar
2024-03-01 | gemma : fix bfloat16 -> float16 conversion issue (#5810)           | kunal-vaishnavi
2024-02-25 | py : fix StableLM conversion after config.json changes (#5703)     | Anas Ahouzi
2024-02-23 | convert : fix missing ftype for gemma (#5690)                      | Jared Van Bortel
2024-02-22 | mpt : do not duplicate token_embd.weight on disk (#5670)           | Jared Van Bortel
2024-02-22 | py : add Gemma conversion from HF models (#5647)                   | Georgi Gerganov
2024-02-22 | py : minor fixes (#5668)                                           | Georgi Gerganov
2024-02-15 | Use correct type of pooling for embedding models (#5500)           | Douglas Hanley
2024-02-13 | llama : add support for Nomic Embed (#5468)                        | Jared Van Bortel
2024-02-13 | llama : support batched embeddings (#5466)                         | Douglas Hanley
2024-02-11 | Add support for BERT embedding models (#5423)                      | Douglas Hanley
2024-02-08 | llama : fix MiniCPM (#5392)                                        | runfuture
2024-02-07 | llama : add MiniCPM support (#5346)                                | runfuture
2024-02-05 | py : fix internlm2-hf convert to gguf (#5305)                      | Guoteng
2024-02-02 | py : add check for '.attn.masked_bias' layers to GPT2model (#5281) | Mirror Azure
2024-02-01 | llama : support InternLM2 (#5184)                                  | Guoteng
2024-01-28 | llama : add support for Orion-14B (#5118)                          | sharpHL
2024-01-22 | llama : support StableLM 2 1.6B (#5052)                            | compilade
2024-01-20 | convert : partially revert PR #4818 (#5041)                        | Jared Van Bortel
2024-01-19 | llama : support upcoming Qwen2 (#5037)                             | Shijie
2024-01-19 | py : fix flake8 lint                                               | Georgi Gerganov
2024-01-19 | llama : add CodeShell support (#5016)                              | chiranko
2024-01-16 | py : remove unnecessary hasattr (#4903)                            | Georgi Gerganov
2024-01-13 | convert : update phi-2 to latest HF repo (#4903)                   | Georgi Gerganov
2024-01-12 | py : fix lint (#4889)                                              | Georgi Gerganov
2024-01-12 | llama : fix llm_build_k_shift to use correct n_rot (#4889)         | Georgi Gerganov
2024-01-02 | py : re-enable mmap in convert hf (#4732)                          | Nam D. Tran
2023-12-29 | python : add check-requirements.sh and GitHub workflow (#4585)     | crasm
2023-12-28 | gpt2 : Add gpt2 architecture integration (#4555)                   | manikbhandari
2023-12-27 | llama : add AWQ for llama, llama2, mpt, and mistral models (#4593) | Nam D. Tran
2023-12-24 | llama : add PLaMo model (#3557)                                    | Shintarou Okada
2023-12-18 | llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490)  | Ebey Abraham
2023-12-13 | llama : add Mixtral support (#4406)                                | slaren
2023-12-01 | llama : add Qwen support (#4281)                                   | Shijie
2023-11-25 | scripts : Use mmap in torch load (#4202)                           | Galunid
2023-11-24 | convert : fix tensors using grad in some models (#4173)            | Galunid
2023-11-20 | ci : add flake8 to github actions (python linting) (#4129)         | Galunid
2023-11-17 | py : Falcon HF compatibility (#4104)                               | John
2023-11-14 | stablelm : StableLM support (#3586)                                | Galunid
2023-11-09 | scripts: Generalize convert scripts (#3838)                        | Galunid