google.generativeai.protos.GenerateContentResponse.UsageMetadata

Metadata on the generation request's token usage.

prompt_token_count int

Number of tokens in the prompt. When cached_content is set, this is still the total effective prompt size, i.e. it includes the number of tokens in the cached content.

cached_content_token_count int

Number of tokens in the cached part of the prompt, i.e. in the cached content.

candidates_token_count int

Total number of tokens across the generated candidates.

total_token_count int

Total token count for the generation request (prompt + candidates).
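The relationships among these four counts can be sketched with plain Python. This is illustrative only: the dataclass below merely mimics the proto's fields, the numeric values are made up, and in the real SDK the populated object arrives as `response.usage_metadata` after a call such as `GenerativeModel.generate_content`.

```python
from dataclasses import dataclass

# Stand-in for the UsageMetadata proto, for illustration only.
@dataclass
class UsageMetadata:
    prompt_token_count: int          # full effective prompt, cached tokens included
    cached_content_token_count: int  # cached portion of the prompt
    candidates_token_count: int      # tokens across all generated candidates
    total_token_count: int           # prompt + candidates

# Hypothetical numbers: a 1200-token prompt of which 1000 came from cached content.
usage = UsageMetadata(
    prompt_token_count=1200,
    cached_content_token_count=1000,
    candidates_token_count=150,
    total_token_count=1350,
)

# The cached portion is a subset of the prompt, never an addition to it...
assert usage.cached_content_token_count <= usage.prompt_token_count
# ...and the grand total is prompt + candidates.
assert usage.total_token_count == usage.prompt_token_count + usage.candidates_token_count

# Prompt tokens that were not served from the cache:
uncached_prompt = usage.prompt_token_count - usage.cached_content_token_count
print(uncached_prompt)  # → 200
```

The key point the sketch encodes: cached_content_token_count is included in prompt_token_count (subtract it to get the non-cached remainder), while total_token_count is simply the prompt and candidate counts summed.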