path: root/common/common.cpp
author    Andrew Canis <andrew.canis@gmail.com>    2024-03-15 16:41:22 -0400
committer GitHub <noreply@github.com>              2024-03-15 22:41:22 +0200
commit    12247f4c69a173b9482f68aaa174ec37fc909ccf (patch)
tree      1c580de91d5d0676e146bb45b9197d88aeb226fd /common/common.cpp
parent    4e9a7f7f7fb6acbddd1462909c8d696e38edbfcc (diff)
llama : add Command-R support (#6033)
Information about the Command-R 35B model (128k context) can be found at:
https://huggingface.co/CohereForAI/c4ai-command-r-v01

Based on the llama2 model with a few changes:

1) New hyperparameter to scale output logits (logit_scale)
2) Uses LayerNorm instead of RMSNorm
3) Transformer layers have a single shared LayerNorm that feeds into both the
   self-attention and FFN layers in parallel. There is no post-attention
   LayerNorm. (See the sketch below.)
4) No support for Rotary Position Embeddings (RoPE) scaling
5) No biases used

Find GGUF files here:
https://huggingface.co/andrewcanis/c4ai-command-r-v01-GGUF

To convert the model to GGUF format yourself:

1) Download the Command-R Hugging Face safetensors:
   git lfs install
   git clone https://huggingface.co/CohereForAI/c4ai-command-r-v01
2) Run:
   python3 convert-hf-to-gguf.py --outtype f16 ./c4ai-command-r-v01
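The layer layout in point 3 and the logit_scale hyperparameter in point 1 can be illustrated with a minimal sketch. This is not llama.cpp code: the vector type, the LayerNorm, and the attention/FFN placeholders below are hypothetical stand-ins chosen only to show the wiring (one shared norm feeding two parallel branches, no post-attention norm, logits multiplied by logit_scale).

    // Minimal sketch (not llama.cpp code) of the Command-R block layout:
    // a single shared LayerNorm feeds self-attention and the FFN in parallel,
    // there is no post-attention LayerNorm, and output logits are scaled.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    using Vec = std::vector<float>;

    // Standard LayerNorm (mean/variance), as opposed to the RMSNorm used by llama2.
    static Vec layer_norm(const Vec &x, float eps = 1e-5f) {
        float mean = 0.0f;
        for (float v : x) mean += v;
        mean /= x.size();
        float var = 0.0f;
        for (float v : x) var += (v - mean) * (v - mean);
        var /= x.size();
        Vec out(x.size());
        for (size_t i = 0; i < x.size(); ++i)
            out[i] = (x[i] - mean) / std::sqrt(var + eps);
        return out;
    }

    // Placeholder branches standing in for the real attention/FFN computations.
    static Vec self_attention(const Vec &x) { return x; }
    static Vec ffn(const Vec &x) { Vec y = x; for (float &v : y) v *= 0.5f; return y; }

    // One transformer block: both branches read the same normalized input and
    // their outputs are added back to the residual stream together.
    static Vec command_r_block(const Vec &residual) {
        Vec normed   = layer_norm(residual);   // single shared norm
        Vec attn_out = self_attention(normed); // parallel branch 1
        Vec ffn_out  = ffn(normed);            // parallel branch 2
        Vec out(residual.size());
        for (size_t i = 0; i < out.size(); ++i)
            out[i] = residual[i] + attn_out[i] + ffn_out[i]; // no post-attention norm
        return out;
    }

    int main() {
        Vec hidden = command_r_block({0.1f, -0.3f, 0.7f, 0.2f});

        // Logits (here: just the final hidden state, for brevity) are
        // multiplied by logit_scale before sampling.
        const float logit_scale = 0.0625f; // illustrative value only
        for (float &v : hidden) v *= logit_scale;
        for (float v : hidden) std::printf("%f\n", v);
        return 0;
    }

The parallel layout means the block needs only one normalization per layer instead of the pre-attention and pre-FFN norms of a sequential llama2-style block.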
Diffstat (limited to 'common/common.cpp')
0 files changed, 0 insertions, 0 deletions