author    | compilade <git@compilade.net> | 2024-05-08 18:16:38 -0400
committer | GitHub <noreply@github.com> | 2024-05-08 18:16:38 -0400
commit    | f98eb31c517c95960df1d0abc48002787f145f3b (patch)
tree      | de51a7b79fa5e6488ed4f76b5d0867d6c23d3c51 /examples
parent    | bc4bba364fb96d908f2698e908648df5e6f55e02 (diff)
convert-hf : save memory with lazy evaluation (#7075)
* convert-hf : begin refactoring write_tensor
* convert : upgrade to sentencepiece v0.2.0
* convert-hf : remove unused n_dims in extra_*_tensors
* convert-hf : simplify MoE weights stacking
* convert-hf : flake8 linter doesn't like semicolons
* convert-hf : allow unusual model part names
For example, loading `model-00001-of-00001.safetensors` now works.
* convert-hf : fix stacking MoE expert tensors
`torch.stack` and `torch.cat` don't do the same thing (see the stack-vs-cat sketch after this list).
* convert-hf : fix Mamba conversion
Tested to work even with a SentencePiece-based tokenizer.
* convert : use a string for the SentencePiece tokenizer path
* convert-hf : display tensor shape
* convert-hf : convert norms to f32 by default
* convert-hf : sort model part names
`os.listdir` is said to list files in arbitrary order.
Sorting the file names should let "model-00009-of-00042.safetensors"
be loaded before "model-00010-of-00042.safetensors" (see the sorting sketch after this list).
* convert-hf : use an ABC for Model again
It seems Protocol can't be used as a statically type-checked ABC,
because its subclasses also can't be instantiated. (Why did it seem to work?)
At least there's still a way to raise an error when a registered Model subclass
forgets to define its `model_arch` property (see the registration sketch after this list).
* convert-hf : use a plain class for Model, and forbid direct instantiation
There are no abstract methods used anyway,
so using ABC isn't really necessary.
* convert-hf : more consistent formatting of cmdline args
* convert-hf : align the message logged for converted tensors
* convert-hf : fix Refact conversion
* convert-hf : save memory with lazy evaluation
* convert-hf : flake8 doesn't like lowercase L as a variable name
* convert-hf : remove einops requirement for InternLM2
* convert-hf : faster model parts loading
Instead of pre-loading them all into a dict, iterate over the tensors
in the model parts progressively, as needed in Model.write_tensors.
Conversion for some architectures relies on checking for the presence
of specific tensor names, so for multi-part models, the weight map is read
from the relevant JSON file to quickly get these names up-front
(see the weight-map sketch after this list).
* convert-hf : minor changes for consistency
* gguf-py : add tqdm as a dependency
It's small, and it's used for a progress bar
in GGUFWriter.write_tensors_to_file (see the tqdm sketch after this list).
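
On the MoE stacking fix: a minimal illustration of how `torch.stack` and `torch.cat` differ (`stack` inserts a new dimension, `cat` joins along an existing one). The shapes are arbitrary stand-ins for per-expert weight tensors, not taken from any real model.

```python
import torch

# Two stand-in per-expert weight tensors of shape (n_out, n_in).
a = torch.zeros(4, 8)
b = torch.zeros(4, 8)

# torch.stack inserts a new leading dimension: (n_expert, n_out, n_in).
print(torch.stack([a, b], dim=0).shape)  # torch.Size([2, 4, 8])

# torch.cat joins along an existing dimension: (n_expert * n_out, n_in).
print(torch.cat([a, b], dim=0).shape)    # torch.Size([8, 8])
```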
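
On sorting model part names: a small, self-contained sketch of why a plain lexicographic sort is enough here, since the part numbers in the file names are zero-padded. The file names are only illustrative.

```python
# os.listdir makes no ordering guarantee, so the part names are sorted explicitly.
# Zero-padding means lexicographic order matches numeric order.
part_names = [
    "model-00010-of-00042.safetensors",
    "model-00009-of-00042.safetensors",
]
print(sorted(part_names))
# ['model-00009-of-00042.safetensors', 'model-00010-of-00042.safetensors']
```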
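
On using a plain class for Model: a rough sketch, not the actual convert-hf code, of how direct instantiation can be forbidden and how registration can fail early when a subclass forgets to define `model_arch`. The `register` decorator and `_model_classes` names are assumptions made for illustration.

```python
class Model:
    _model_classes: dict[str, type["Model"]] = {}

    def __init__(self) -> None:
        # Forbid using the base class directly, without relying on ABC.
        if type(self) is Model:
            raise TypeError(f"{type(self).__name__!r} is not meant to be used directly")

    @classmethod
    def register(cls, *names: str):
        def decorator(subclass: type["Model"]) -> type["Model"]:
            # Fail at registration time if model_arch was forgotten.
            if not hasattr(subclass, "model_arch"):
                raise TypeError(f"{subclass.__name__} must define model_arch")
            for name in names:
                cls._model_classes[name] = subclass
            return subclass
        return decorator
```

A registered subclass would then be declared with something like `@Model.register("SomeModelForCausalLM")` above `class SomeModel(Model)` that sets `model_arch` (hypothetical names).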
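
On saving memory with lazy evaluation: a very small sketch of the general idea only, not the actual implementation. Each tensor is represented by its metadata plus a function that materializes it, so only the tensor currently being written has to be resident in memory. `LazyTensor` and `write_all` are hypothetical names.

```python
from dataclasses import dataclass
from typing import Callable

import torch


@dataclass
class LazyTensor:
    name: str
    shape: tuple[int, ...]
    load: Callable[[], torch.Tensor]  # materializes the data only when called


def write_all(tensors: list[LazyTensor]) -> None:
    for t in tensors:
        data = t.load()   # only this tensor is held in memory
        # ... convert / quantize / write `data` here ...
        del data          # release it before materializing the next one
```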
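
On reading the weight map: multi-part Hugging Face checkpoints ship an index file (e.g. `model.safetensors.index.json`) whose `weight_map` maps each tensor name to the part file containing it, so tensor names can be collected without loading any weights. A rough sketch; the directory path is a placeholder.

```python
import json
from pathlib import Path

model_dir = Path("path/to/model")  # placeholder
with open(model_dir / "model.safetensors.index.json", encoding="utf-8") as f:
    weight_map = json.load(f)["weight_map"]  # {tensor name: part file name}

tensor_names = set(weight_map)
# Architecture-specific presence checks can now happen before any part is loaded.
```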
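
On the tqdm dependency: a minimal example of the kind of usage described, wrapping an iterable so a progress bar is shown while writing. The loop body is a placeholder, not the actual GGUFWriter code.

```python
from tqdm import tqdm

tensors = list(range(100))  # stand-in for the tensors to write

for tensor in tqdm(tensors, desc="Writing", unit="tensor"):
    pass  # write the tensor here
```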
Diffstat (limited to 'examples')
-rw-r--r-- | examples/server/tests/features/steps/steps.py | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/examples/server/tests/features/steps/steps.py b/examples/server/tests/features/steps/steps.py
index 0882a5d3..f4b1ac1d 100644
--- a/examples/server/tests/features/steps/steps.py
+++ b/examples/server/tests/features/steps/steps.py
@@ -939,7 +939,7 @@ async def oai_chat_completions(user_prompt,
 while event_received:
     event_received = False
     async for line_in_bytes in response.content:
-        line = line_in_bytes.decode('utf8')
+        line = line_in_bytes.decode('utf-8')
         line = line.rstrip('\n').rstrip('\r')
         if line == '':
             continue