author | Georgi Gerganov <ggerganov@gmail.com> | 2023-08-23 23:08:04 +0300
committer | GitHub <noreply@github.com> | 2023-08-23 23:08:04 +0300
commit | cf658adc832badaaa2ca119fe86070e5a830f8f6 (patch)
tree | e314db2fb18676067ddbc5cde0cf7f73c417af29 /llama.h
parent | a192860cfec89a38d59a943623bf595b1fe4495b (diff)
llm : add Falcon support (#2717)
* llama : refactor GGUF constants into static maps
* llama : check if model architecture is known
* llama : refactor llama_model_load_internal()
* gguf : add KV constant maps
* llm : read arch-specific KVs
* convert : add dummy scores + types
* falcon : load tensor data (CPU only)
* llama : fix loading progress bar
* llama : add arch member to llama_model
* falcon : CPU inference working
* falcon : support non-40B models
* falcon : minor
* llama : minor updates
ggml-ci
* convert-falcon-hf-to-gguf.py : fix special token mapping
* llama.cpp : llama default UNK token = id 0
* llama.cpp : fix bpe tokenizer
* llama.cpp : fix the fix of bpe tokenizer
* ggml : pass eps to ggml_norm (see the sketch below)
* metal : implement RoPE (mode = 2) + avoid ggml_repeat
* ggml : ggml_repeat always creates new tensor
* falcon : copy-paste self-attention from LLaMA
* metal : print extra compute pipeline info
* falcon : minor changes (still chasing the Metal problem)
* llama.cpp : fix linefeed token
* metal : fix GELU kernel numerical stability by using precise::tanh
* metal : temporary workaround for the concurrency optimization bug
* falcon : add CUDA offloading (#2739)
* llama : better model naming and size reporting
* llama : prep new tokenizer support
* llama : advanced BPE tokenizer based on ggllm.cpp implementation
* llama : remove obsolete comment
ggml-ci
* common : remove obsolete BPE API + disable test-tokenizer-1
* llama : revert BPE special-case in llama_byte_to_token()
* cuda : add TODOs for RoPE NeoX implementation
* llama : default special tokens based on vocab type
* perplexity : add log for start of tokenization
---------
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
Co-authored-by: slaren <slarengh@gmail.com>
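
The `ggml : pass eps to ggml_norm` item above turns the layer-norm epsilon into a caller-supplied argument instead of a constant hard-coded inside ggml, presumably so each architecture can use the value stored in its GGUF metadata. Below is a minimal standalone sketch of the post-commit call, assuming only the `ggml_norm(ctx, tensor, eps)` signature this commit introduces and the graph-compute helpers ggml exposes at this revision; the tensor size and input values are arbitrary:

```c
#include <stdio.h>
#include "ggml.h"

int main(void) {
    // Small scratch context; 16 MiB is plenty for one tiny tensor.
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 8);
    for (int i = 0; i < 8; i++) {
        ((float *) x->data)[i] = (float) i;
    }

    // After this commit the epsilon is an explicit parameter rather
    // than a compile-time constant inside ggml_norm.
    const float eps = 1e-5f;
    struct ggml_tensor * y = ggml_norm(ctx, x, eps);

    struct ggml_cgraph gf = ggml_build_forward(y);
    ggml_graph_compute_with_ctx(ctx, &gf, /*n_threads=*/ 1);

    for (int i = 0; i < 8; i++) {
        printf("%8.4f ", ((float *) y->data)[i]);
    }
    printf("\n");

    ggml_free(ctx);
    return 0;
}
```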
Diffstat (limited to 'llama.h')
-rw-r--r-- | llama.h | 15
1 file changed, 2 insertions, 13 deletions
@@ -247,6 +247,8 @@ extern "C" {
     LLAMA_API int llama_n_ctx (const struct llama_context * ctx);
     LLAMA_API int llama_n_embd (const struct llama_context * ctx);
 
+    LLAMA_API enum llama_vocab_type llama_vocab_type(const struct llama_context * ctx);
+
     LLAMA_API int llama_model_n_vocab(const struct llama_model * model);
     LLAMA_API int llama_model_n_ctx (const struct llama_model * model);
     LLAMA_API int llama_model_n_embd (const struct llama_model * model);
@@ -368,13 +370,6 @@ extern "C" {
             int n_max_tokens,
             bool add_bos);
 
-    LLAMA_API int llama_tokenize_bpe(
-            struct llama_context * ctx,
-            const char * text,
-            llama_token * tokens,
-            int n_max_tokens,
-            bool add_bos);
-
     LLAMA_API int llama_tokenize_with_model(
             const struct llama_model * model,
             const char * text,
@@ -390,12 +385,6 @@ extern "C" {
             char * buf,
             int length);
 
-    LLAMA_API int llama_token_to_str_bpe(
-            const struct llama_context * ctx,
-            llama_token token,
-            char * buf,
-            int length);
-
     LLAMA_API int llama_token_to_str_with_model(
             const struct llama_model * model,
             llama_token token,
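
A minimal sketch of the consolidated API after this change: the `*_bpe` entry points are gone, and a caller queries the vocabulary type at runtime instead of picking a tokenizer-specific function. The `llama_vocab_type()`, `llama_tokenize()`, and `llama_token_to_str()` signatures come from the hunks above; the model-loading and teardown calls are what I believe llama.h exposes at this revision, so treat them as assumptions:

```c
#include <stdio.h>
#include "llama.h"

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.gguf>\n", argv[0]);
        return 1;
    }

    struct llama_context_params params = llama_context_default_params();
    struct llama_model   * model = llama_load_model_from_file(argv[1], params);
    struct llama_context * ctx   = llama_new_context_with_model(model, params);

    // One entry point for every vocabulary: the loader selects SPM or BPE
    // from the model metadata, so no llama_tokenize_bpe() is needed.
    printf("vocab type: %d\n", llama_vocab_type(ctx));

    llama_token tokens[64];
    const int n = llama_tokenize(ctx, "Hello Falcon", tokens, 64, /*add_bos=*/ true);

    char buf[64];
    for (int i = 0; i < n; i++) {
        llama_token_to_str(ctx, tokens[i], buf, sizeof(buf));
        printf("%5d -> '%s'\n", tokens[i], buf);
    }

    llama_free(ctx);
    llama_free_model(model);
    return 0;
}
```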