author | goerch <jhr.walter@t-online.de> | 2023-10-03 09:16:26 +0200
---|---|---
committer | GitHub <noreply@github.com> | 2023-10-03 09:16:26 +0200
commit | ff5a3f0c09dfa0a8e0bf76d1748df5c6dee0e8ff (patch) |
tree | 356ce471234d1f82db452e6274a951ac0b72cb9f /convert.py |
parent | 1c84003c08027f5d3a4cb876f51d6b6224a34d0e (diff) |
Work on the BPE tokenizer (#3252)
* Work on the BPE tokenizer
Tokenizer tests work for Falcon-7B
* Try to fix build problem
* Fix debug assertion failure
* Fix MSVC Unicode BOM problem
* Cleanup and an improvement
* Fix compiler warning
* Cleanup
* Test doesn't work over the full range of Unicode code points
* Update .gitignore and Makefile
* Another Makefile rule
* Testing Aquila
* Moving byte decoding back to `token_to_piece`, because everyone is using it (see the byte-decoding sketch after this list)
* Guarding some unusable code paths
* Streamlining code and adding some more assertions
Important change: I'm classifying added tokens as control tokens now for BPE.
* Adding a comment
* Adding another assertion
* Fixed vocabulary guarding assertions
* Fix PR for recent change
* Fix PR for recent change
* Fix for compiler warning
* Fix PR for recent change
* Fix PR for recent change
* Fix PR for recent change
* Fix for compiler warning
* Fixes for more compiler warnings
* Remove unused code
* Fix initialization of static maps
* Add scores and token types back, adapt gptneox
* Update llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update unicode.h
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update unicode.h
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Ported Starcoder and added some assertions
* Fix coding style
* Apply @jploski 's fix for missing tokens
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
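As noted in the `token_to_piece` bullet above, BPE/SPM vocabularies spell raw-byte tokens as `<0xNN>`, and decoding one means mapping that spelling back to the byte it names. Below is a minimal Python sketch of the idea; the helper name `piece_to_bytes` is hypothetical, and the actual decoding lives in llama.cpp's C++ `token_to_piece` path:

```python
# Sketch only: byte tokens in a BPE/SPM vocab are spelled "<0xNN>".
# The helper name `piece_to_bytes` is hypothetical, for illustration.
def piece_to_bytes(piece: str) -> bytes:
    if piece.startswith("<0x") and piece.endswith(">"):
        # "<0x0A>" names the single byte 0x0A.
        return bytes([int(piece[3:-1], 16)])
    # Ordinary pieces are plain UTF-8 text.
    return piece.encode("utf-8")

assert piece_to_bytes("<0x41>") == b"A"
assert piece_to_bytes("hello") == b"hello"
```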
Diffstat (limited to 'convert.py')
-rwxr-xr-x | convert.py | 24
1 file changed, 5 insertions, 19 deletions
```diff
@@ -338,29 +338,15 @@ class BpeVocab:
     def bpe_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]:
         tokenizer = self.bpe_tokenizer
         from transformers.models.gpt2 import tokenization_gpt2  # type: ignore[import]
-        byte_encoder = tokenization_gpt2.bytes_to_unicode()
-        byte_decoder = {v: k for k, v in byte_encoder.items()}
-        score = 0.0
-        for i, item in enumerate(tokenizer):
-            text: bytes = item.encode("utf-8")
-            # FIXME: These shouldn't be hardcoded, but it's probably better than the current behavior?
-            if i <= 258 and text.startswith(b'<') and text.endswith(b'>'):
-                if i == 0 and text == b'<unk>':
-                    toktype = gguf.TokenType.UNKNOWN
-                elif i == 1 or i == 2:
-                    toktype = gguf.TokenType.CONTROL
-                elif i >= 3 and text.startswith(b'<0x'):
-                    toktype = gguf.TokenType.BYTE
-                else:
-                    toktype = gguf.TokenType.NORMAL
-            else:
-                toktype = gguf.TokenType.NORMAL
-            yield text, score, toktype
+        reverse_vocab = {id: encoded_tok for encoded_tok, id in tokenizer.items()}
+
+        for i, _ in enumerate(tokenizer):
+            yield reverse_vocab[i], 0.0, gguf.TokenType.NORMAL
 
     def added_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]:
         for text in self.added_tokens_list:
             score = -1000.0
-            yield text.encode("utf-8"), score, gguf.TokenType.USER_DEFINED
+            yield text.encode("utf-8"), score, gguf.TokenType.CONTROL
 
     def all_tokens(self) -> Iterable[tuple[bytes, float, gguf.TokenType]]:
         yield from self.bpe_tokens()
```
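To make the simplified conversion concrete, here is a minimal standalone sketch of the new `bpe_tokens`/`added_tokens` behavior. The toy vocab and the `TokenType` stand-in enum are assumptions for illustration only; the real script iterates the model's loaded vocabulary and uses `gguf.TokenType`:

```python
# Minimal sketch of the simplified conversion; toy data throughout.
from enum import IntEnum

class TokenType(IntEnum):  # stand-in for gguf.TokenType (assumed values)
    NORMAL = 1
    CONTROL = 3

# A BPE vocab maps encoded token -> id (e.g. loaded from vocab.json).
tokenizer = {"hello": 0, "world": 1, "<|endoftext|>": 2}

# Invert once, then emit base tokens in id order with a flat 0.0 score,
# all typed NORMAL (no more hardcoded id ranges).
reverse_vocab = {tok_id: tok for tok, tok_id in tokenizer.items()}
for i in range(len(tokenizer)):
    print(reverse_vocab[i].encode("utf-8"), 0.0, TokenType.NORMAL)

# Added tokens are now classified as CONTROL (previously USER_DEFINED).
for text in ["<|pad|>"]:  # hypothetical added-token list
    print(text.encode("utf-8"), -1000.0, TokenType.CONTROL)
```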