Configuration options for model generation and outputs.

Not all parameters may be configurable for every model.

candidate_count int

Optional. Number of generated responses to return. Currently, this value can only be set to 1; if unset, it defaults to 1.

stop_sequences MutableSequence[str]

Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response.
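The documented truncation behavior can be sketched in pure Python. This is an illustrative sketch only, not the API's implementation: the output stops at the first appearance of any stop sequence, and the sequence itself is excluded.

```python
def apply_stop_sequences(text: str, stop_sequences: list[str]) -> str:
    """Truncate text at the first occurrence of any stop sequence.

    Illustrative sketch: generation halts at the first stop sequence
    found, and the stop sequence itself is not included in the result.
    """
    if len(stop_sequences) > 5:
        raise ValueError("At most 5 stop sequences are supported.")
    # Find the earliest occurrence of any stop sequence.
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1 and idx < cut:
            cut = idx
    return text[:cut]
```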

max_output_tokens int

Optional. The maximum number of tokens to include in a candidate.

temperature float

Optional. Controls the randomness of the output.

Values can range from [0.0, 2.0].
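The effect of temperature can be illustrated with a standard temperature-scaled softmax (a sketch of the general technique, not this API's internals): lower values sharpen the token distribution toward the most likely token, higher values flatten it.

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert logits to probabilities, scaled by temperature.

    Illustrative sketch: low temperature sharpens the distribution
    (more deterministic); high temperature flattens it (more random).
    The sketch requires temperature > 0 to avoid division by zero.
    """
    if not 0.0 < temperature <= 2.0:
        raise ValueError("temperature must be in (0.0, 2.0] for this sketch")
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```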

top_p float

Optional. The maximum cumulative probability of tokens to consider when sampling.

The model uses combined Top-k and nucleus sampling.

Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on the cumulative probability.
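The nucleus (top-p) step described above can be sketched in pure Python: keep the smallest set of most-likely tokens whose cumulative probability reaches top_p. This is an illustrative sketch of the technique, not the model's implementation.

```python
def nucleus_filter(probs: dict[str, float], top_p: float) -> dict[str, float]:
    """Keep the smallest set of most-likely tokens whose cumulative
    probability reaches top_p (illustrative sketch of nucleus sampling)."""
    kept = {}
    cumulative = 0.0
    # Walk tokens from most to least likely, accumulating probability mass.
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    return kept
```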

top_k int

Optional. The maximum number of tokens to consider when sampling.

Models use nucleus sampling or combined Top-k and nucleus sampling. Top-k sampling considers the set of the top_k most probable tokens. Models running with nucleus sampling alone don't allow a top_k setting.
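By contrast with the cumulative-probability cutoff above, the top-k step simply keeps a fixed number of the most probable tokens. Again an illustrative sketch, not the model's implementation:

```python
def top_k_filter(probs: dict[str, float], top_k: int) -> dict[str, float]:
    """Keep only the top_k most probable tokens (illustrative sketch)."""
    # Rank tokens from most to least likely, then truncate to top_k.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:top_k])
```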

response_mime_type str

Optional. Output response MIME type of the generated candidate text. Supported MIME types: text/plain (default): text output; application/json: JSON response in the candidates.


response_schema Schema

Optional. Output response schema of the generated candidate text when the response MIME type can have a schema. The schema can be objects, primitives, or arrays, and is a subset of the OpenAPI schema.

If set, a compatible response_mime_type must also be set. Compatible MIME types: application/json (schema for a JSON response).
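Putting the schema-related fields together, a generation config requesting structured JSON output might look like the following. This is a sketch using plain dicts; the field names come from this reference, while the specific schema contents (a list of name/score objects) are an invented example following the OpenAPI-style subset described above.

```python
# Sketch of a generation config requesting structured JSON output.
# The schema is a hypothetical example in the OpenAPI-style subset.
generation_config = {
    "candidate_count": 1,
    "max_output_tokens": 256,
    "temperature": 0.7,
    "response_mime_type": "application/json",  # required when a schema is set
    "response_schema": {
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "score": {"type": "number"},
            },
        },
    },
}
```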