The Gemini API supports content generation with images, audio, code, tools, and more. For details on each of these features, read on and check out the task-focused sample code, or read the comprehensive guides.
- Text generation
- Vision
- Audio
- Embeddings
- Long context
- Code execution
- JSON Mode
- Function calling
- System instructions
Method: models.generateContent
Generates a model response given an input GenerateContentRequest. Refer to the text generation guide for detailed usage information. Input capabilities differ between models, including tuned models. Refer to the model guide and tuning guide for details.
Endpoint
POST https://generativelanguage.googleapis.com/v1beta/{model=models/*}:generateContent
Path parameters
model (string)
Required. The name of the Model to use for generating the completion.
Format: models/{model}.
Request body
The request body contains data with the following structure:
tools[]
Optional. A list of Tools the Model may use to generate the next response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the Model. Supported Tools are Function and codeExecution. Refer to the Function calling and Code execution guides to learn more.
toolConfig
Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.
safetySettings[]
Optional. A list of unique SafetySetting instances for blocking unsafe content.
This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. The harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, and HARM_CATEGORY_CIVIC_INTEGRITY are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.
systemInstruction
Optional. Developer-set system instruction(s). Currently, text only.
generationConfig
Optional. Configuration options for model generation and outputs.
cachedContent (string)
Optional. The name of the cached content to use as context to serve the prediction. Format: cachedContents/{cachedContent}
Example request
Task-focused sample code for this method is available in the following languages, by use case (a minimal REST sketch follows the list):
- Text: Python, Node.js, Go, Shell, Java
- Image: Python, Node.js, Go, Shell, Java
- Audio: Python, Node.js, Go, Shell
- Video: Python, Node.js, Go, Shell
- Chat: Python, Node.js, Go, Shell, Java
- Cache: Python, Node.js, Go
- Tuned Model: Python
- JSON Mode: Python, Node.js, Go, Shell, Java
- Code execution: Python, Go, Java
- Function Calling: Python, Go, Node.js, Shell, Java
- Generation config: Python, Node.js, Go, Shell, Java
- Safety Settings: Python, Node.js, Go, Shell, Java
- System Instruction: Python, Node.js, Go, Shell, Java
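The per-language samples themselves live in the guides. As a rough, unofficial sketch of calling the endpoint above (the model name, prompt, and key-based authentication are placeholders; the contents field carries the prompt, per GenerateContentRequest):

```python
# Minimal sketch of a generateContent REST call. Not an official sample:
# the model name, prompt, and key-based auth are assumptions.
import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]   # assumes this env var is set
MODEL = "models/gemini-2.0-flash"         # placeholder model name

url = f"https://generativelanguage.googleapis.com/v1beta/{MODEL}:generateContent"
body = {
    "contents": [{"parts": [{"text": "Write a one-sentence summary of the water cycle."}]}]
}

resp = requests.post(url, params={"key": API_KEY}, json=body, timeout=60)
resp.raise_for_status()
data = resp.json()

# candidates[] carries the generated content; promptFeedback explains blocked prompts.
print(data["candidates"][0]["content"]["parts"][0]["text"])
```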
Response body
If successful, the response body contains an instance of GenerateContentResponse.
Method: models.streamGenerateContent
Generates a streamed response from the model given an input GenerateContentRequest.
Endpoint
POST https://generativelanguage.googleapis.com/v1beta/{model=models/*}:streamGenerateContent
Path parameters
model (string)
Required. The name of the Model to use for generating the completion.
Format: models/{model}.
Request body
The request body contains data with the following structure:
tools[]
Optional. A list of Tools the Model may use to generate the next response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the Model. Supported Tools are Function and codeExecution. Refer to the Function calling and Code execution guides to learn more.
toolConfig
Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.
safetySettings[]
Optional. A list of unique SafetySetting instances for blocking unsafe content.
This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. The harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, and HARM_CATEGORY_CIVIC_INTEGRITY are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.
systemInstruction
Optional. Developer-set system instruction(s). Currently, text only.
generationConfig
Optional. Configuration options for model generation and outputs.
cachedContent (string)
Optional. The name of the cached content to use as context to serve the prediction. Format: cachedContents/{cachedContent}
Example request
Task-focused sample code for this method is available in the following languages, by use case (a streaming sketch follows the list):
- Text: Python, Node.js, Go, Shell, Java
- Image: Python, Node.js, Go, Shell, Java
- Audio: Python, Go, Shell
- Video: Python, Node.js, Go, Shell
- Chat: Python, Node.js, Go, Shell
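As a rough, unofficial sketch of a streaming call (the alt=sse query parameter for server-sent events, the model name, and key-based auth are assumptions):

```python
# Sketch of streamGenerateContent with server-sent events. Not an official sample:
# alt=sse, the model name, and key-based auth are assumptions.
import json
import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]
MODEL = "models/gemini-2.0-flash"  # placeholder model name

url = f"https://generativelanguage.googleapis.com/v1beta/{MODEL}:streamGenerateContent"
body = {"contents": [{"parts": [{"text": "Tell me a short story about a lighthouse."}]}]}

with requests.post(url, params={"key": API_KEY, "alt": "sse"}, json=body,
                   stream=True, timeout=300) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # Each SSE "data:" line carries one GenerateContentResponse chunk.
        if line and line.startswith("data: "):
            chunk = json.loads(line[len("data: "):])
            for cand in chunk.get("candidates", []):
                for part in cand.get("content", {}).get("parts", []):
                    print(part.get("text", ""), end="", flush=True)
```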
Response body
If successful, the response body contains a stream of GenerateContentResponse instances.
GenerateContentResponse
Response from the model supporting multiple candidate responses.
Safety ratings and content filtering are reported both for the prompt in GenerateContentResponse.prompt_feedback and for each candidate in finishReason and in safetyRatings. The API:
- Returns either all requested candidates or none of them
- Returns no candidates at all only if there was something wrong with the prompt (check promptFeedback)
- Reports feedback on each candidate in finishReason and safetyRatings
candidates[]
Candidate responses from the model.
promptFeedback
Returns the prompt's feedback related to the content filters.
usageMetadata
Output only. Metadata on the generation request's token usage.
modelVersion (string)
Output only. The model version used to generate the response.
responseId (string)
Output only. responseId is used to identify each response.
| JSON representation |
|---|
{ "candidates": [ { object ( |
PromptFeedback
A set of feedback metadata for the prompt specified in GenerateContentRequest.content.
blockReason
Optional. If set, the prompt was blocked and no candidates are returned. Rephrase the prompt.
safetyRatings[]
Ratings for safety of the prompt. There is at most one rating per category.
| JSON representation |
|---|
{ "blockReason": enum ( |
BlockReason
Specifies the reason why the prompt was blocked.
| Enums | |
|---|---|
| BLOCK_REASON_UNSPECIFIED | Default value. This value is unused. |
| SAFETY | Prompt was blocked due to safety reasons. Inspect safetyRatings to understand which safety category blocked it. |
| OTHER | Prompt was blocked due to unknown reasons. |
| BLOCKLIST | Prompt was blocked due to terms included in the terminology blocklist. |
| PROHIBITED_CONTENT | Prompt was blocked due to prohibited content. |
| IMAGE_SAFETY | Candidates blocked due to unsafe image generation content. |
UsageMetadata
Metadata on the generation request's token usage.
promptTokenCount (integer)
Number of tokens in the prompt. When cachedContent is set, this is still the total effective prompt size, meaning it includes the number of tokens in the cached content.
cachedContentTokenCount (integer)
Number of tokens in the cached part of the prompt (the cached content).
candidatesTokenCount (integer)
Total number of tokens across all the generated response candidates.
toolUsePromptTokenCount (integer)
Output only. Number of tokens present in tool-use prompt(s).
thoughtsTokenCount (integer)
Output only. Number of tokens of thoughts for thinking models.
totalTokenCount (integer)
Total token count for the generation request (prompt + response candidates).
promptTokensDetails[]
Output only. List of modalities that were processed in the request input.
cacheTokensDetails[]
Output only. List of modalities of the cached content in the request input.
candidatesTokensDetails[]
Output only. List of modalities that were returned in the response.
toolUsePromptTokensDetails[]
Output only. List of modalities that were processed for tool-use request inputs.
| JSON representation |
|---|
{ "promptTokenCount": integer, "cachedContentTokenCount": integer, "candidatesTokenCount": integer, "toolUsePromptTokenCount": integer, "thoughtsTokenCount": integer, "totalTokenCount": integer, "promptTokensDetails": [ { object ( |
Candidate
- JSON representation
- FinishReason
- GroundingAttribution
- AttributionSourceId
- GroundingPassageId
- SemanticRetrieverChunk
- GroundingMetadata
- SearchEntryPoint
- GroundingChunk
- Web
- RetrievedContext
- Maps
- PlaceAnswerSources
- ReviewSnippet
- GroundingSupport
- Segment
- RetrievalMetadata
- LogprobsResult
- TopCandidates
- Candidate
- UrlContextMetadata
- UrlMetadata
- UrlRetrievalStatus
A response candidate generated from the model.
content
Output only. Generated content returned from the model.
finishReason
Optional. Output only. The reason why the model stopped generating tokens.
If empty, the model has not stopped generating tokens.
safetyRatings[]
List of ratings for the safety of a response candidate.
There is at most one rating per category.
citationMetadata
Output only. Citation information for the model-generated candidate.
This field may be populated with recitation information for any text included in the content. These are passages that are "recited" from copyrighted material in the foundational LLM's training data.
tokenCount (integer)
Output only. Token count for this candidate.
groundingAttributions[]
Output only. Attribution information for sources that contributed to a grounded answer.
This field is populated for GenerateAnswer calls.
groundingMetadata
Output only. Grounding metadata for the candidate.
This field is populated for GenerateContent calls.
avgLogprobs (number)
Output only. Average log probability score of the candidate.
logprobsResult
Output only. Log-likelihood scores for the response tokens and top tokens.
urlContextMetadata
Output only. Metadata related to the URL context retrieval tool.
index (integer)
Output only. Index of the candidate in the list of response candidates.
finishMessage (string)
Optional. Output only. Details the reason why the model stopped generating tokens. This is populated only when finishReason is set.
| JSON representation |
|---|
{ "content": { object ( |
FinishReason
Defines the reason why the model stopped generating tokens.
| Enums | |
|---|---|
| FINISH_REASON_UNSPECIFIED | Default value. This value is unused. |
| STOP | Natural stop point of the model or provided stop sequence. |
| MAX_TOKENS | The maximum number of tokens as specified in the request was reached. |
| SAFETY | The response candidate content was flagged for safety reasons. |
| RECITATION | The response candidate content was flagged for recitation reasons. |
| LANGUAGE | The response candidate content was flagged for using an unsupported language. |
| OTHER | Unknown reason. |
| BLOCKLIST | Token generation stopped because the content contains forbidden terms. |
| PROHIBITED_CONTENT | Token generation stopped for potentially containing prohibited content. |
| SPII | Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII). |
| MALFORMED_FUNCTION_CALL | The function call generated by the model is invalid. |
| IMAGE_SAFETY | Token generation stopped because generated images contain safety violations. |
| IMAGE_PROHIBITED_CONTENT | Image generation stopped because the generated images contain other prohibited content. |
| IMAGE_OTHER | Image generation stopped due to other miscellaneous issues. |
| NO_IMAGE | The model was expected to generate an image, but none was generated. |
| IMAGE_RECITATION | Image generation stopped due to recitation. |
| UNEXPECTED_TOOL_CALL | The model generated a tool call, but no tools were enabled in the request. |
| TOO_MANY_TOOL_CALLS | The model called too many tools consecutively, so the system exited execution. |
GroundingAttribution
Attribution for a source that contributed to an answer.
sourceId
Output only. Identifier for the source contributing to this attribution.
content
Grounding source content that makes up this attribution.
| JSON representation |
|---|
{ "sourceId": { object ( |
AttributionSourceId
Identifier for the source contributing to this attribution.
source (Union type)
source can be only one of the following:
groundingPassage
Identifier for an inline passage.
semanticRetrieverChunk
Identifier for a Chunk fetched via Semantic Retriever.
| JSON representation |
|---|
{ // source "groundingPassage": { object (GroundingPassageId) }, "semanticRetrieverChunk": { object (SemanticRetrieverChunk) } } |
GroundingPassageId
Identifier for a part within a GroundingPassage.
passageId (string)
Output only. ID of the passage matching the GenerateAnswerRequest's GroundingPassage.id.
partIndex (integer)
Output only. Index of the part within the GenerateAnswerRequest's GroundingPassage.content.
| JSON representation |
|---|
{ "passageId": string, "partIndex": integer } |
SemanticRetrieverChunk
Identifier for a Chunk retrieved via Semantic Retriever specified in the GenerateAnswerRequest using SemanticRetrieverConfig.
source (string)
Output only. Name of the source matching the request's SemanticRetrieverConfig.source. Example: corpora/123 or corpora/123/documents/abc
chunk (string)
Output only. Name of the Chunk containing the attributed text. Example: corpora/123/documents/abc/chunks/xyz
| JSON representation |
|---|
{ "source": string, "chunk": string } |
GroundingMetadata
Metadata returned to client when grounding is enabled.
groundingChunks[]
List of supporting references retrieved from the specified grounding source.
groundingSupports[]
List of grounding support.
webSearchQueries[] (string)
Web search queries for follow-up web searches.
searchEntryPoint
Optional. Google Search entry point for follow-up web searches.
retrievalMetadata
Metadata related to retrieval in the grounding flow.
googleMapsWidgetContextToken (string)
Optional. Resource name of the Google Maps widget context token that can be used with the PlacesContextElement widget in order to render contextual data. Only populated in the case that grounding with Google Maps is enabled.
| JSON representation |
|---|
{ "groundingChunks": [ { object ( |
SearchEntryPoint
Google search entry point.
renderedContent (string)
Optional. Web content snippet that can be embedded in a web page or an app webview.
sdkBlob (string)
Optional. Base64-encoded JSON representing an array of <search term, search url> tuples.
A base64-encoded string.
| JSON representation |
|---|
{ "renderedContent": string, "sdkBlob": string } |
GroundingChunk
Grounding chunk.
chunk_type (Union type)
chunk_type can be only one of the following:
web
Grounding chunk from the web.
retrievedContext
Optional. Grounding chunk from context retrieved by the file search tool.
maps
Optional. Grounding chunk from Google Maps.
| JSON representation |
|---|
{ // chunk_type "web": { object (Web) }, "retrievedContext": { object (RetrievedContext) }, "maps": { object (Maps) } } |
Web
Chunk from the web.
uri (string)
URI reference of the chunk.
title (string)
Title of the chunk.
| JSON representation |
|---|
{ "uri": string, "title": string } |
RetrievedContext
Chunk from context retrieved by the file search tool.
uri (string)
Optional. URI reference of the semantic retrieval document.
title (string)
Optional. Title of the document.
text (string)
Optional. Text of the chunk.
| JSON representation |
|---|
{ "uri": string, "title": string, "text": string } |
Maps
A grounding chunk from Google Maps. A Maps chunk corresponds to a single place.
uri (string)
URI reference of the place.
title (string)
Title of the place.
text (string)
Text description of the place answer.
placeId (string)
The ID of the place, in places/{placeId} format. A user can use this ID to look up that place.
placeAnswerSources
Sources that provide answers about the features of a given place in Google Maps.
| JSON representation |
|---|
{ "uri": string, "title": string, "text": string, "placeId": string, "placeAnswerSources": { object (PlaceAnswerSources) } } |
PlaceAnswerSources
Collection of sources that provide answers about the features of a given place in Google Maps. Each PlaceAnswerSources message corresponds to a specific place in Google Maps. The Google Maps tool uses these sources to answer questions about features of the place (e.g., "does Bar Foo have Wifi?" or "is Foo Bar wheelchair accessible?"). Currently, only review snippets are supported as sources.
reviewSnippets[]
Snippets of reviews that are used to generate answers about the features of a given place in Google Maps.
| JSON representation |
|---|
{ "reviewSnippets": [ { object (ReviewSnippet) } ] } |
ReviewSnippet
Encapsulates a snippet of a user review that answers a question about the features of a specific place in Google Maps.
reviewId (string)
The ID of the review snippet.
googleMapsUri (string)
A link that corresponds to the user review on Google Maps.
title (string)
Title of the review.
| JSON representation |
|---|
{ "reviewId": string, "googleMapsUri": string, "title": string } |
GroundingSupport
Grounding support.
groundingChunkIndices[] (integer)
A list of indices (into 'grounding_chunk') specifying the citations associated with the claim. For instance, [1,3,4] means that grounding_chunk[1], grounding_chunk[3], and grounding_chunk[4] are the retrieved content attributed to the claim.
confidenceScores[] (number)
Confidence score of the support references. Ranges from 0 to 1, where 1 is the most confident. This list must have the same size as groundingChunkIndices.
segment
Segment of the content this support belongs to.
| JSON representation |
|---|
{ "groundingChunkIndices": [ integer ], "confidenceScores": [ number ], "segment": { object (Segment) } } |
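As an illustration of how these indices line up, here is a small sketch that resolves a support entry back to its chunks. The data is hand-written, and the groundingSupports list name is an assumption based on the JSON representation above:

```python
# Sketch: mapping a GroundingSupport entry back to its GroundingChunk sources.
# grounding_metadata is hand-written stand-in data, not real API output.
grounding_metadata = {
    "groundingChunks": [
        {"web": {"uri": "https://example.com/a", "title": "Source A"}},
        {"web": {"uri": "https://example.com/b", "title": "Source B"}},
    ],
    "groundingSupports": [  # assumed field name for the "list of grounding support"
        {
            "groundingChunkIndices": [1],
            "confidenceScores": [0.92],
            "segment": {"startIndex": 0, "endIndex": 22, "text": "A claim in the answer."},
        }
    ],
}

for support in grounding_metadata["groundingSupports"]:
    claim = support["segment"].get("text", "")
    for idx, score in zip(support["groundingChunkIndices"], support["confidenceScores"]):
        chunk = grounding_metadata["groundingChunks"][idx]
        source = chunk.get("web") or chunk.get("retrievedContext") or chunk.get("maps") or {}
        print(f"{claim!r} <- {source.get('title')} ({source.get('uri')}), score {score}")
```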
Segment
Segment of the content.
partIndex (integer)
Output only. The index of a Part object within its parent Content object.
startIndex (integer)
Output only. Start index in the given Part, measured in bytes. Offset from the start of the Part, inclusive, starting at zero.
endIndex (integer)
Output only. End index in the given Part, measured in bytes. Offset from the start of the Part, exclusive, starting at zero.
text (string)
Output only. The text corresponding to the segment from the response.
| JSON representation |
|---|
{ "partIndex": integer, "startIndex": integer, "endIndex": integer, "text": string } |
RetrievalMetadata
Metadata related to retrieval in the grounding flow.
googleSearchDynamicRetrievalScore (number)
Optional. Score indicating how likely information from google search could help answer the prompt. The score is in the range [0, 1], where 0 is the least likely and 1 is the most likely. This score is only populated when google search grounding and dynamic retrieval is enabled. It will be compared to the threshold to determine whether to trigger google search.
| JSON representation |
|---|
{ "googleSearchDynamicRetrievalScore": number } |
LogprobsResult
Logprobs Result
topCandidates[]
Length = total number of decoding steps.
chosenCandidates[]
Length = total number of decoding steps. The chosen candidates may or may not be in topCandidates.
logProbabilitySum (number)
Sum of log probabilities for all tokens.
| JSON representation |
|---|
{ "topCandidates": [ { object ( |
TopCandidates
Candidates with top log probabilities at each decoding step.
Sorted by log probability in descending order.
| JSON representation |
|---|
{ "candidates": [ { object (Candidate) } ] } |
Candidate
Candidate for the logprobs token and score.
token (string)
The candidate's token string value.
tokenId (integer)
The candidate's token id value.
logProbability (number)
The candidate's log probability.
| JSON representation |
|---|
{ "token": string, "tokenId": integer, "logProbability": number } |
UrlContextMetadata
Metadata related to the URL context retrieval tool.
urlMetadata[]
List of URL contexts.
| JSON representation |
|---|
{ "urlMetadata": [ { object (UrlMetadata) } ] } |
UrlMetadata
Context of a single URL retrieval.
retrievedUrl (string)
URL retrieved by the tool.
urlRetrievalStatus
Status of the URL retrieval.
| JSON representation |
|---|
{ "retrievedUrl": string, "urlRetrievalStatus": enum (UrlRetrievalStatus) } |
UrlRetrievalStatus
Status of the url retrieval.
| Enums | |
|---|---|
| URL_RETRIEVAL_STATUS_UNSPECIFIED | Default value. This value is unused. |
| URL_RETRIEVAL_STATUS_SUCCESS | URL retrieval was successful. |
| URL_RETRIEVAL_STATUS_ERROR | URL retrieval failed due to an error. |
| URL_RETRIEVAL_STATUS_PAYWALL | URL retrieval failed because the content is behind a paywall. |
| URL_RETRIEVAL_STATUS_UNSAFE | URL retrieval failed because the content is unsafe. |
CitationMetadata
A collection of source attributions for a piece of content.
citationSources[]
Citations to sources for a specific response.
| JSON representation |
|---|
{ "citationSources": [ { object (CitationSource) } ] } |
CitationSource
A citation to a source for a portion of a specific response.
startIndex (integer)
Optional. Start of the segment of the response that is attributed to this source.
Index indicates the start of the segment, measured in bytes.
endIndex (integer)
Optional. End of the attributed segment, exclusive.
uri (string)
Optional. URI that is attributed as a source for a portion of the text.
license (string)
Optional. License for the GitHub project that is attributed as a source for the segment.
License info is required for code citations.
| JSON representation |
|---|
{ "startIndex": integer, "endIndex": integer, "uri": string, "license": string } |
GenerationConfig
- JSON representation
- Modality
- SpeechConfig
- VoiceConfig
- PrebuiltVoiceConfig
- MultiSpeakerVoiceConfig
- SpeakerVoiceConfig
- ThinkingConfig
- ImageConfig
- MediaResolution
Configuration options for model generation and outputs. Not all parameters are configurable for every model.
stopSequences[] (string)
Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.
responseMimeType (string)
Optional. MIME type of the generated candidate text. Supported MIME types are: text/plain (default) for text output, application/json for a JSON response in the response candidates, and text/x.enum for an ENUM as a string response in the response candidates. Refer to the docs for a list of all supported text MIME types.
responseSchema
Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives, or arrays.
If set, a compatible responseMimeType must also be set. Compatible MIME types: application/json: Schema for JSON response. Refer to the JSON text generation guide for more details.
responseJsonSchema
Optional. Output schema of the generated response. This is an alternative to responseSchema that accepts JSON Schema.
If set, responseSchema must be omitted, but responseMimeType is required.
While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported:
$id, $defs, $ref, $anchor, type, format, title, description, enum (for strings and numbers), items, prefixItems, minItems, maxItems, minimum, maximum, anyOf, oneOf (interpreted the same as anyOf), properties, additionalProperties, required
The non-standard propertyOrdering property may also be set.
Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If $ref is set on a sub-schema, no other properties, except for those starting with a $, may be set.
Optional. An internal detail. Use responseJsonSchema rather than this field.
responseModalities[]
Optional. The requested modalities of the response. Represents the set of modalities that the model can return, and should be expected in the response. This is an exact match to the modalities of the response.
A model may have multiple combinations of supported modalities. If the requested modalities do not match any of the supported combinations, an error will be returned.
An empty list is equivalent to requesting only text.
candidateCount (integer)
Optional. Number of generated responses to return. If unset, this will default to 1. Please note that this doesn't work for previous-generation models (Gemini 1.0 family).
maxOutputTokens (integer)
Optional. The maximum number of tokens to include in a response candidate.
Note: The default value varies by model, see the Model.output_token_limit attribute of the Model returned from the getModel function.
temperature (number)
Optional. Controls the randomness of the output.
Note: The default value varies by model, see the Model.temperature attribute of the Model returned from the getModel function.
Values can range from [0.0, 2.0].
topP (number)
Optional. The maximum cumulative probability of tokens to consider when sampling.
The model uses combined Top-k and Top-p (nucleus) sampling.
Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on the cumulative probability.
Note: The default value varies by Model and is specified by the Model.top_p attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests.
topK (integer)
Optional. The maximum number of tokens to consider when sampling.
Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of topK most probable tokens. Models running with nucleus sampling don't allow a topK setting.
Note: The default value varies by Model and is specified by the Model.top_p attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests.
seed (integer)
Optional. Seed used in decoding. If not set, the request uses a randomly generated seed.
presencePenalty (number)
Optional. Presence penalty applied to the next token's logprobs if the token has already been seen in the response.
This penalty is binary on/off and not dependent on the number of times the token is used (after the first). Use frequencyPenalty for a penalty that increases with each use.
A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary.
A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary.
frequencyPenalty (number)
Optional. Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the response so far.
A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: the more a token is used, the more difficult it is for the model to use that token again, increasing the vocabulary of responses.
Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the maxOutputTokens limit.
responseLogprobs (boolean)
Optional. If true, export the logprobs results in the response.
logprobs (integer)
Optional. Only valid if responseLogprobs=True. This sets the number of top logprobs to return at each decoding step in Candidate.logprobs_result. The number must be in the range of [0, 20].
enableEnhancedCivicAnswers (boolean)
Optional. Enables enhanced civic answers. It may not be available for all models.
speechConfig
Optional. The speech generation config.
thinkingConfig
Optional. Config for thinking features. An error will be returned if this field is set for models that don't support thinking.
imageConfig
Optional. Config for image generation. An error will be returned if this field is set for models that don't support these config options.
mediaResolution
Optional. If specified, the media resolution specified will be used.
| JSON representation |
|---|
{ "stopSequences": [ string ], "responseMimeType": string, "responseSchema": { object ( |
Modality
Supported modalities of the response.
| Enums | |
|---|---|
| MODALITY_UNSPECIFIED | Default value. |
| TEXT | Indicates the model should return text. |
| IMAGE | Indicates the model should return images. |
| AUDIO | Indicates the model should return audio. |
SpeechConfig
The speech generation config.
voiceConfig
The configuration in case of single-voice output.
multiSpeakerVoiceConfig
Optional. The configuration for the multi-speaker setup. It is mutually exclusive with the voiceConfig field.
languageCode (string)
Optional. Language code (in BCP 47 format, e.g. "en-US") for speech synthesis.
Valid values are: de-DE, en-AU, en-GB, en-IN, en-US, es-US, fr-FR, hi-IN, pt-BR, ar-XA, es-ES, fr-CA, id-ID, it-IT, ja-JP, tr-TR, vi-VN, bn-IN, gu-IN, kn-IN, ml-IN, mr-IN, ta-IN, te-IN, nl-NL, ko-KR, cmn-CN, pl-PL, ru-RU, and th-TH.
| JSON representation |
|---|
{ "voiceConfig": { object ( |
VoiceConfig
The configuration for the voice to use.
voice_config (Union type)
voice_config can be only one of the following:
prebuiltVoiceConfig
The configuration for the prebuilt voice to use.
| JSON representation |
|---|
{ // voice_config "prebuiltVoiceConfig": { object (PrebuiltVoiceConfig) } } |
PrebuiltVoiceConfig
The configuration for the prebuilt speaker to use.
voiceName (string)
The name of the preset voice to use.
| JSON representation |
|---|
{ "voiceName": string } |
MultiSpeakerVoiceConfig
The configuration for the multi-speaker setup.
speakerVoiceConfigs[]
Required. All the enabled speaker voices.
| JSON representation |
|---|
{ "speakerVoiceConfigs": [ { object (SpeakerVoiceConfig) } ] } |
SpeakerVoiceConfig
The configuration for a single speaker in a multi speaker setup.
speaker (string)
Required. The name of the speaker to use. Should be the same as in the prompt.
voiceConfig
Required. The configuration for the voice to use.
| JSON representation |
|---|
{ "speaker": string, "voiceConfig": { object (VoiceConfig) } } |
ThinkingConfig
Config for thinking features.
includeThoughts (boolean)
Indicates whether to include thoughts in the response. If true, thoughts are returned only when available.
thinkingBudget (integer)
The number of thoughts tokens that the model should generate.
| JSON representation |
|---|
{ "includeThoughts": boolean, "thinkingBudget": integer } |
ImageConfig
Config for image generation features.
aspectRatio (string)
Optional. The aspect ratio of the image to generate. Supported aspect ratios: 1:1, 2:3, 3:2, 3:4, 4:3, 9:16, 16:9, 21:9.
If not specified, the model will choose a default aspect ratio based on any reference images provided.
| JSON representation |
|---|
{ "aspectRatio": string } |
MediaResolution
Media resolution for the input media.
| Enums | |
|---|---|
| MEDIA_RESOLUTION_UNSPECIFIED | Media resolution has not been set. |
| MEDIA_RESOLUTION_LOW | Media resolution set to low (64 tokens). |
| MEDIA_RESOLUTION_MEDIUM | Media resolution set to medium (256 tokens). |
| MEDIA_RESOLUTION_HIGH | Media resolution set to high (zoomed reframing with 256 tokens). |
HarmCategory
The category of a rating.
These categories cover various kinds of harms that developers may wish to adjust.
| Enums | |
|---|---|
| HARM_CATEGORY_UNSPECIFIED | Category is unspecified. |
| HARM_CATEGORY_DEROGATORY | PaLM - Negative or harmful comments targeting identity and/or protected attribute. |
| HARM_CATEGORY_TOXICITY | PaLM - Content that is rude, disrespectful, or profane. |
| HARM_CATEGORY_VIOLENCE | PaLM - Describes scenarios depicting violence against an individual or group, or general descriptions of gore. |
| HARM_CATEGORY_SEXUAL | PaLM - Contains references to sexual acts or other lewd content. |
| HARM_CATEGORY_MEDICAL | PaLM - Promotes unchecked medical advice. |
| HARM_CATEGORY_DANGEROUS | PaLM - Dangerous content that promotes, facilitates, or encourages harmful acts. |
| HARM_CATEGORY_HARASSMENT | Gemini - Harassment content. |
| HARM_CATEGORY_HATE_SPEECH | Gemini - Hate speech and content. |
| HARM_CATEGORY_SEXUALLY_EXPLICIT | Gemini - Sexually explicit content. |
| HARM_CATEGORY_DANGEROUS_CONTENT | Gemini - Dangerous content. |
| HARM_CATEGORY_CIVIC_INTEGRITY | Gemini - Content that may be used to harm civic integrity. DEPRECATED: use enableEnhancedCivicAnswers instead. |
ModalityTokenCount
Represents token counting info for a single modality.
modality
The modality associated with this token count.
tokenCount (integer)
Number of tokens.
| JSON representation |
|---|
{ "modality": enum (Modality), "tokenCount": integer } |
Modality
Content Part modality
| Enums | |
|---|---|
| MODALITY_UNSPECIFIED | Unspecified modality. |
| TEXT | Plain text. |
| IMAGE | Image. |
| VIDEO | Video. |
| AUDIO | Audio. |
| DOCUMENT | Document, e.g. PDF. |
SafetyRating
Safety rating for a piece of content.
The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.
category
Required. The category for this rating.
probability
Required. The probability of harm for this content.
blocked (boolean)
Was this content blocked because of this rating?
| JSON representation |
|---|
{ "category": enum ( |
HarmProbability
The probability that a piece of content is harmful.
The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.
| Enums | |
|---|---|
| HARM_PROBABILITY_UNSPECIFIED | Probability is unspecified. |
| NEGLIGIBLE | Content has a negligible chance of being unsafe. |
| LOW | Content has a low chance of being unsafe. |
| MEDIUM | Content has a medium chance of being unsafe. |
| HIGH | Content has a high chance of being unsafe. |
SafetySetting
Safety setting, affecting the safety-blocking behavior.
Passing a safety setting for a category changes the allowed probability that content is blocked.
category
Required. The category for this setting.
threshold
Required. Controls the probability threshold at which harm is blocked.
| JSON representation |
|---|
{ "category": enum ( |
HarmBlockThreshold
Block at and beyond a specified harm probability.
| Enums | |
|---|---|
| HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold is unspecified. |
| BLOCK_LOW_AND_ABOVE | Content with NEGLIGIBLE will be allowed. |
| BLOCK_MEDIUM_AND_ABOVE | Content with NEGLIGIBLE and LOW will be allowed. |
| BLOCK_ONLY_HIGH | Content with NEGLIGIBLE, LOW, and MEDIUM will be allowed. |
| BLOCK_NONE | All content will be allowed. |
| OFF | Turn off the safety filter. |
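Tying these enums back to the safetySettings request field, here is a sketch of a request that relaxes one category and tightens another. The categories, thresholds, and prompt are placeholders, and the threshold field name mirrors the SafetySetting description above:

```python
# Sketch: per-request safety settings using the HarmCategory and HarmBlockThreshold
# enums defined above. Values are placeholders, not recommendations.
request_body = {
    "contents": [{"parts": [{"text": "Summarize this news article about a protest."}]}],
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ],
}
```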