Diffstat (limited to 'examples/server/README.md'):
 examples/server/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/examples/server/README.md b/examples/server/README.md
index bf371364..a7c3f0b5 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -272,7 +272,7 @@ node index.js
 
 `logit_bias`: Modify the likelihood of a token appearing in the generated text completion. For example, use `"logit_bias": [[15043,1.0]]` to increase the likelihood of the token 'Hello', or `"logit_bias": [[15043,-1.0]]` to decrease its likelihood. Setting the value to false, `"logit_bias": [[15043,false]]` ensures that the token `Hello` is never produced. The tokens can also be represented as strings, e.g. `[["Hello, World!",-0.5]]` will reduce the likelihood of all the individual tokens that represent the string `Hello, World!`, just like the `presence_penalty` does. Default: `[]`
 
-`n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token. Default: `0`
+`n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token given the sampling settings. Note that for temperature < 0 the tokens are sampled greedily but token probabilities are still being calculated via a simple softmax of the logits without considering any other sampler settings. Default: `0`
 
 `min_keep`: If greater than 0, force samplers to return N possible tokens at minimum. Default: `0`
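For illustration, a minimal request that exercises the updated `n_probs` behavior could look like the sketch below. It assumes a llama.cpp server is already running locally on port 8080 and that per-token probabilities come back in a `completion_probabilities` field; the port and the exact response field name are assumptions for the example, not part of this diff.

```python
# Minimal sketch (stdlib only): call the server's /completion endpoint with
# n_probs set. Assumes http://localhost:8080 and a "completion_probabilities"
# response field (both assumed here, not stated in this diff).
import json
import urllib.request

payload = {
    "prompt": "Building a website can be done in 10 simple steps:",
    "n_predict": 8,
    "temperature": -1,  # negative temperature: greedy sampling, probabilities still reported
    "n_probs": 5,       # request the top 5 token probabilities per generated token
}

req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

print(result.get("content", ""))
# Each entry (if present) lists candidate tokens with their softmax probabilities.
for entry in result.get("completion_probabilities", []):
    print(entry)
```

With a negative temperature, the generated tokens themselves are picked greedily, but the reported probabilities are still a plain softmax over the logits, so they are not affected by top-k, top-p, or other sampler settings.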