A simple dataclass used to configure the generation parameters of `GenerativeModel.generate_content`.
```python
google.generativeai.types.GenerationConfig(
    candidate_count: int | None = None,
    stop_sequences: Iterable[str] | None = None,
    max_output_tokens: int | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    top_k: int | None = None,
    response_mime_type: str | None = None,
    response_schema: protos.Schema | Mapping[str, Any] | None = None
)
```
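A minimal construction sketch. Since `generate_content` also accepts a plain mapping with the same field names in place of a `GenerationConfig` instance, the example below builds the config as a dict so it can be inspected without the SDK installed; the model name in the comment is illustrative.

```python
# Generation parameters as a plain mapping; GenerativeModel.generate_content
# accepts either a GenerationConfig instance or a dict with these keys.
generation_config = {
    "candidate_count": 1,            # number of responses to return
    "stop_sequences": ["\n\n"],      # stop at the first blank line
    "max_output_tokens": 256,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
}

# With the SDK installed, this would be passed as (illustrative model name):
#   model = genai.GenerativeModel("gemini-1.5-flash")
#   response = model.generate_content(prompt, generation_config=generation_config)
```

Fields left unset (`None`) fall back to the defaults in the model's specification.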
| Attributes | |
|---|---|
| `candidate_count` | Number of generated responses to return. |
| `stop_sequences` | The set of character sequences (up to 5) that will stop output generation. If specified, the API stops at the first appearance of a stop sequence; the stop sequence is not included in the response. |
| `max_output_tokens` | The maximum number of tokens to include in a candidate. If unset, this defaults to the `output_token_limit` specified in the model's specification. |
| `temperature` | Controls the randomness of the output. Note: the default value varies by model; see the `Model.temperature` attribute of the `Model` returned by the `genai.get_model` function. Values range over [0.0, 1.0], inclusive. A value closer to 1.0 produces responses that are more varied and creative, while a value closer to 0.0 typically results in more straightforward responses from the model. |
| `top_p` | Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined top-k and nucleus sampling: tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on their cumulative probability. |
| `top_k` | Optional. `int`. The maximum number of tokens to consider when sampling. The model uses combined top-k and nucleus sampling; top-k sampling considers the set of the `top_k` most probable tokens. |
| `response_mime_type` | Optional. Output response MIME type of the generated candidate text. Supported MIME types: `text/plain` (default) and `application/json`. |
| `response_schema` | Optional. Specifies the format of the JSON requested when `response_mime_type` is `application/json`. |
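A sketch of requesting structured JSON output via `response_mime_type` and `response_schema`, assuming the chosen model supports JSON mode. The schema is expressed as a plain mapping, which the constructor accepts in place of a `protos.Schema`; the field names in the schema are illustrative.

```python
# JSON-mode configuration; the schema mapping mirrors a subset of the
# OpenAPI schema object that protos.Schema models.
json_config = {
    "response_mime_type": "application/json",
    "response_schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "score": {"type": "number"},
        },
        "required": ["name"],
    },
}

# Passed the same way as any other generation config:
#   model.generate_content(prompt, generation_config=json_config)
```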
Methods

`__eq__`

```python
__eq__(
    other
)
```
| Class Variables | |
|---|---|
| `candidate_count` | `None` |
| `max_output_tokens` | `None` |
| `response_mime_type` | `None` |
| `response_schema` | `None` |
| `stop_sequences` | `None` |
| `temperature` | `None` |
| `top_k` | `None` |
| `top_p` | `None` |