path: root/convert_lora_to_gguf.py
author    saood06 <saood05@gmail.com> 2025-05-09 09:17:41 -0500
committer GitHub <noreply@github.com> 2025-05-09 09:17:41 -0500
commit    967a2e1860482397a6a386952d972ac1205474ad (patch)
tree      e5fce0dba400eaf98ba343a06f007ddad408e5ce /convert_lora_to_gguf.py
parent    e5a4a3ce78ce96b6822dcd6138a98c4d237ecc9b (diff)
Fix missing rope_freqs with convert_hf_to_gguf (#402)
* lora : fix llama conversion script with ROPE_FREQS

* convert : refactor rope_freqs generation

  This should also fix vocab-only conversion for Phi-3.

* convert : adapt MiniCPM3 to separate rope_freqs insertion

  MiniCPM3's tokenizer is treated as a SentencePiece tokenizer to avoid
  having to run its custom Python code which mixes tokenization
  in the same file as tool calls.

* gguf-py : add long and short RoPE factors to tensor mappings

  Empty, but the key names are used to populate the mappings.

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Francis Couture-Harpin <git@compilade.net>
Diffstat (limited to 'convert_lora_to_gguf.py')
-rwxr-xr-x convert_lora_to_gguf.py | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/convert_lora_to_gguf.py b/convert_lora_to_gguf.py
index a88d0d4a..ef088034 100755
--- a/convert_lora_to_gguf.py
+++ b/convert_lora_to_gguf.py
@@ -331,6 +331,10 @@ if __name__ == '__main__':
             self.gguf_writer.add_float32(gguf.Keys.Adapter.LORA_ALPHA, self.lora_alpha)
             super().set_gguf_parameters()
 
+        def generate_extra_tensors(self) -> Iterable[tuple[str, Tensor]]:
+            # Never add extra tensors (e.g. rope_freqs) for LoRA adapters
+            return ()
+
         def get_tensors(self) -> Iterator[tuple[str, Tensor]]:
             tensor_map: dict[str, PartialLoraTensor] = {}
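The patch overrides `generate_extra_tensors` so the LoRA converter never emits base-model extra tensors such as `rope_freqs`, while full-model conversion still does. A minimal sketch of that override pattern, with hypothetical class names and toy tensor values standing in for the real converter classes:

```python
from typing import Iterable, Iterator

Tensor = list  # stand-in for a real tensor type

class BaseModelSketch:
    """Hypothetical base converter: emits extra tensors alongside model weights."""

    def generate_extra_tensors(self) -> Iterable[tuple[str, Tensor]]:
        # e.g. RoPE frequency factors derived from the model config
        yield ("rope_freqs.weight", [1.0, 0.5, 0.25])

    def get_tensors(self) -> Iterator[tuple[str, Tensor]]:
        # extra tensors first, then the regular weights
        yield from self.generate_extra_tensors()
        yield ("model.layers.0.attn_q.weight", [0.0])

class LoraModelSketch(BaseModelSketch):
    """Hypothetical LoRA converter: suppresses extra tensors entirely."""

    def generate_extra_tensors(self) -> Iterable[tuple[str, Tensor]]:
        # Never add extra tensors (e.g. rope_freqs) for LoRA adapters
        return ()

base_names = [name for name, _ in BaseModelSketch().get_tensors()]
lora_names = [name for name, _ in LoraModelSketch().get_tensors()]
print(base_names)  # includes rope_freqs.weight
print(lora_names)  # rope_freqs.weight is absent
```

Because `get_tensors` consumes `generate_extra_tensors` via `yield from`, returning an empty iterable in the subclass is enough to drop the extra tensors without duplicating any of the base iteration logic.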