Configuration options for model generation and outputs. Not all parameters may be configurable for every model.
JSON representation

```json
{
  "stopSequences": [string],
  "candidateCount": integer,
  "maxOutputTokens": integer,
  "temperature": number,
  "topP": number,
  "topK": integer
}
```
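A concrete payload matching the representation above can be written as a plain dictionary. The values here are illustrative only, not defaults:

```python
# Illustrative generationConfig payload; field names mirror the JSON
# representation above, values are example choices, not model defaults.
generation_config = {
    "stopSequences": ["END"],   # up to 5 sequences
    "candidateCount": 1,        # currently must be 1
    "maxOutputTokens": 256,
    "temperature": 0.7,         # in [0.0, 2.0]
    "topP": 0.95,
    "topK": 40,
}
```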
Fields
stopSequences[] |
Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response.
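The truncation behavior described above can be sketched as a small helper (a simplified model of the server-side behavior, not the API's implementation):

```python
def apply_stop_sequences(text, stop_sequences):
    """Truncate text at the earliest occurrence of any stop sequence.

    The matched stop sequence itself is excluded from the result,
    mirroring the documented behavior of stopSequences[].
    """
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

apply_stop_sequences("Answer: 42\nEND of output", ["END", "STOP"])
# -> "Answer: 42\n"
```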
candidateCount |
Optional. Number of generated responses to return. Currently, this value can only be set to 1. If unset, this will default to 1.
maxOutputTokens |
Optional. The maximum number of tokens to include in a candidate. Note: The default value varies by model; see the model's documentation.
temperature |
Optional. Controls the randomness of the output. Values can range over [0.0, 2.0]. Note: The default value varies by model; see the model's documentation.
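Temperature's effect on randomness can be illustrated with temperature-scaled softmax, the standard mechanism behind this parameter (a sketch of the general technique, not this API's internals):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before normalizing: values
    # below 1.0 sharpen the distribution (more deterministic),
    # values above 1.0 flatten it (more random).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # near-greedy
hot = softmax_with_temperature(logits, 2.0)   # close to uniform
# The most likely token gets more probability mass at low temperature.
```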
topP |
Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on their cumulative probability. Note: The default value varies by model; see the model's documentation.
topK |
Optional. The maximum number of tokens to consider when sampling. Models use nucleus sampling or combined Top-k and nucleus sampling. Top-k sampling considers the set of topK most probable tokens. Note: The default value varies by model; see the model's documentation.
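The combined Top-k and nucleus filtering described for topP and topK can be sketched as follows (a simplified model of the general technique; the API's actual filtering order and tie-breaking are not specified here):

```python
def filter_candidates(probs, top_k, top_p):
    """Return the indices of tokens eligible for sampling.

    First applies Top-k (keep at most top_k most probable tokens),
    then nucleus/top-p (keep the smallest prefix of those whose
    cumulative probability reaches top_p).
    """
    # Sort token indices by probability, descending.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    # Top-k: hard cap on the number of candidates.
    order = order[:top_k]
    # Nucleus: stop once cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

probs = [0.5, 0.25, 0.15, 0.07, 0.03]
filter_candidates(probs, top_k=4, top_p=0.8)  # -> [0, 1, 2]
```

After filtering, the model samples from the kept tokens (with probabilities renormalized), so topK bounds the candidate count while topP bounds their cumulative mass.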