Response from the model supporting multiple candidates.
Note on safety ratings and content filtering: they are reported both for the prompt, in `GenerateContentResponse.prompt_feedback`, and for each candidate, in `finishReason` and `safetyRatings`. The API contract is that:

- either all requested candidates are returned, or no candidates at all;
- no candidates are returned only if there was something wrong with the prompt (see `promptFeedback`);
- feedback on each candidate is reported in `finishReason` and `safetyRatings`.
JSON representation

```
{
  "candidates": [
    {
      object (Candidate)
    }
  ],
  "promptFeedback": {
    object (PromptFeedback)
  },
  "usageMetadata": {
    object (UsageMetadata)
  }
}
```

| Fields | Description |
|---|---|
| `candidates[]` | Candidate responses from the model. |
| `promptFeedback` | Returns the prompt's feedback related to the content filters. |
| `usageMetadata` | Output only. Metadata on the generation requests' token usage. |
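As a sketch of the contract above, a client can branch on whether `candidates` is empty before reading per-candidate feedback. The response dict below is hand-written for illustration, not real model output, and `extract_texts` is a hypothetical helper, not part of the API:

```python
# Hand-written GenerateContentResponse-shaped dict, for illustration only.
response = {
    "candidates": [
        {
            "content": {"parts": [{"text": "Hello!"}]},
            "finishReason": "STOP",
            "safetyRatings": [
                {"category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE"}
            ],
        }
    ],
    "promptFeedback": {"safetyRatings": []},
    "usageMetadata": {
        "promptTokenCount": 4,
        "candidatesTokenCount": 3,
        "totalTokenCount": 7,
    },
}


def extract_texts(response: dict) -> list:
    """Return candidate texts, or raise if the prompt itself was blocked."""
    if not response.get("candidates"):
        # Per the contract, an empty candidate list means something was
        # wrong with the prompt; promptFeedback.blockReason says why.
        reason = response.get("promptFeedback", {}).get("blockReason", "OTHER")
        raise ValueError(f"Prompt blocked: {reason}")
    texts = []
    for candidate in response["candidates"]:
        # Per-candidate feedback lives in finishReason / safetyRatings.
        if candidate.get("finishReason") == "STOP":
            for part in candidate["content"]["parts"]:
                texts.append(part.get("text", ""))
    return texts


print(extract_texts(response))  # prints ['Hello!']
```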
PromptFeedback

A set of feedback metadata for the prompt specified in `GenerateContentRequest.content`.
JSON representation

```
{
  "blockReason": enum (BlockReason),
  "safetyRatings": [
    {
      object (SafetyRating)
    }
  ]
}
```

| Fields | Description |
|---|---|
| `blockReason` | Optional. If set, the prompt was blocked and no candidates are returned. Rephrase your prompt. |
| `safetyRatings[]` | Ratings for safety of the prompt. There is at most one rating per category. |
BlockReason

Specifies the reason why the prompt was blocked.
| Enums | Description |
|---|---|
| `BLOCK_REASON_UNSPECIFIED` | Default value. This value is unused. |
| `SAFETY` | Prompt was blocked due to safety reasons. You can inspect `safetyRatings` to understand which safety category blocked it. |
| `OTHER` | Prompt was blocked due to unknown reasons. |
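A client might translate `blockReason` into a user-facing explanation along these lines. This is a sketch: the `describe_block` helper and the message wording are invented for illustration, not prescribed by the API:

```python
from typing import Optional

# Illustrative messages for each documented BlockReason value.
BLOCK_MESSAGES = {
    "SAFETY": "The prompt was blocked for safety reasons; inspect "
              "safetyRatings to see which category triggered the block.",
    "OTHER": "The prompt was blocked for an unknown reason.",
}


def describe_block(prompt_feedback: dict) -> Optional[str]:
    """Return an explanation if the prompt was blocked, else None."""
    reason = prompt_feedback.get("blockReason")
    if reason is None:
        return None  # blockReason unset: the prompt was not blocked
    return BLOCK_MESSAGES.get(reason, f"Blocked: {reason}")


print(describe_block({"blockReason": "SAFETY", "safetyRatings": []}))
```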
UsageMetadata
Metadata on the generation request's token usage.
JSON representation

```
{
  "promptTokenCount": integer,
  "cachedContentTokenCount": integer,
  "candidatesTokenCount": integer,
  "totalTokenCount": integer
}
```
| Fields | Description |
|---|---|
| `promptTokenCount` | Number of tokens in the prompt. When `cachedContent` is set, this is still the total effective prompt size, i.e. it includes the number of tokens in the cached content. |
| `cachedContentTokenCount` | Number of tokens in the cached part of the prompt, i.e. in the cached content. |
| `candidatesTokenCount` | Total number of tokens across the generated candidates. |
| `totalTokenCount` | Total token count for the generation request (prompt + candidates). |
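The relationships between these counts follow directly from the field descriptions: `promptTokenCount` already includes the cached tokens, and `totalTokenCount` is prompt plus candidates. The sketch below checks those relationships on made-up numbers; `billable_prompt_tokens` is a hypothetical helper name, not an API field:

```python
# Example UsageMetadata dict; the numbers are invented.
usage = {
    "promptTokenCount": 120,         # includes the cached portion
    "cachedContentTokenCount": 100,  # cached subset of the prompt
    "candidatesTokenCount": 45,
    "totalTokenCount": 165,
}


def billable_prompt_tokens(usage: dict) -> int:
    """Prompt tokens that were not served from the cached content."""
    return usage["promptTokenCount"] - usage.get("cachedContentTokenCount", 0)


# totalTokenCount = prompt + candidates, per the field descriptions.
assert usage["totalTokenCount"] == (
    usage["promptTokenCount"] + usage["candidatesTokenCount"]
)
print(billable_prompt_tokens(usage))  # prints 20
```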