| Age | Commit message | Author |
|---|---|---|
| 2024-05-30 | Move convert.py to examples/convert-legacy-llama.py (#7430)<br><br>* Move convert.py to examples/convert-no-torch.py<br>* Fix CI, scripts, readme files<br>* convert-no-torch -> convert-legacy-llama<br>* Move vocab thing to vocab.py<br>* Fix convert-no-torch -> convert-legacy-llama<br>* Fix lost convert.py in ci/run.sh<br>* Fix imports<br>* Fix gguf not imported correctly<br>* Fix flake8 complaints<br>* Fix check-requirements.sh<br>* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE<br>* Review fixes | Galunid |
| 2023-09-27 | make-ggml.py : compatibility with more models and GGUF (#3290)<br><br>* Resync my fork with new llama.cpp commits<br>* examples : rename to use dash instead of underscore<br>* New model conversions<br><br>Co-authored-by: Georgi Gerganov <ggerganov@gmail.com> | Richard Roberson |
| 2023-08-23 | chmod : make scripts executable (#2675) | Cebtenzzre |
| 2023-07-21 | examples : add easy python script to create quantized (k-bit support) GGML models from local HF Transformer models (#2311)<br><br>* Resync my fork with new llama.cpp commits<br>* examples : rename to use dash instead of underscore<br><br>Co-authored-by: Georgi Gerganov <ggerganov@gmail.com> | Richard Roberson |