diff options
author | goerch <jhr.walter@t-online.de> | 2023-09-16 13:41:33 +0200 |
---|---|---|
committer | GitHub <noreply@github.com> | 2023-09-16 13:41:33 +0200 |
commit | b08e75baea294e366628b898e85c0bd359b58115 (patch) | |
tree | 417a1a8e7589567ceedba88771056aee080c8e70 /llama.h | |
parent | e6616cf0db2b63189fc34d0076f654af9adecdf8 (diff) |
Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170)
* Fix for #2721
* Reenable tokenizer test for LLaMa
* Add `console.cpp` dependency
* Fix dependency to `common`
* Fixing wrong fix.
* Make console usage platform specific
Work on compiler warnings.
* Adapting makefile
* Remove trailing whitespace
* Adapting the other parts of the makefile
* Fix typo.
* Fixing the last deviations from sentencepiece indicated by test-tokenizer-1
* Simplify logic
* Add missing change...
* Fix ugly compiler warning
* llama_tokenize should accept strings containing NUL now
* Adding huichen's test case
Diffstat (limited to 'llama.h')
-rw-r--r-- | llama.h | 2 |
1 file changed, 2 insertions, 0 deletions
@@ -374,6 +374,7 @@ extern "C" {
     LLAMA_API int llama_tokenize(
             struct llama_context * ctx,
                       const char * text,
+                              int   text_len,
                      llama_token * tokens,
                               int   n_max_tokens,
                              bool   add_bos);
@@ -381,6 +382,7 @@ extern "C" {
     LLAMA_API int llama_tokenize_with_model(
         const struct llama_model * model,
                       const char * text,
+                              int   text_len,
                      llama_token * tokens,
                               int   n_max_tokens,
                              bool   add_bos);