| Age | Commit message | Author |
|---|---|---|
| 2024-06-20 | requirements : Bump torch and numpy for python3.12 (#8041) | Hamdoud Hakem |
| 2024-05-30 | Move convert.py to examples/convert-legacy-llama.py (#7430) | Galunid |
| 2024-05-21 | llama : remove Persimmon (#7408) | Georgi Gerganov |
| 2024-05-12 | remove convert-lora-to-ggml.py (#7204) | slaren |
| 2024-05-08 | convert-hf : save memory with lazy evaluation (#7075) | compilade |
| 2024-05-05 | command-r : add BPE pre-tokenization (#7063) | DAN™ |
| 2024-04-29 | llama : fix BPE pre-tokenization (#6920) | Georgi Gerganov |
| 2024-03-01 | convert-hf-to-gguf : require einops for InternLM2ForCausalLM (#5792) | nold |
| 2023-12-29 | python : add check-requirements.sh and GitHub workflow (#4585) | crasm |
