google.generativeai.protos.CountTokensResponse

A response from CountTokens.

It returns the model's token_count for the prompt.

total_tokens (int)

The number of tokens that the model tokenizes the prompt into.

Always non-negative. When cached_content is set, this is still the total effective prompt size, i.e. it includes the number of tokens in the cached content.
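For example, this message is what `GenerativeModel.count_tokens` returns in the Python SDK. A minimal sketch (the API key and model name are illustrative placeholders):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Illustrative model name; any text model supporting count_tokens works.
model = genai.GenerativeModel("gemini-1.5-flash")

# count_tokens returns a CountTokensResponse.
response = model.count_tokens("The quick brown fox jumps over the lazy dog.")
print(response.total_tokens)  # the prompt's token count; always non-negative
```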

cached_content_token_count (int)

The number of tokens in the cached part of the prompt, i.e. the cached content.
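When the model was created from explicit cached content, both fields should be populated, with the cached tokens counted inside total_tokens as described above. A sketch assuming the SDK's caching module; the document contents, TTL, and model name are placeholders, and a real cache requires the input to meet the API's minimum cached-token threshold:

```python
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Placeholder document; explicit caching needs a large enough input
# to satisfy the API's minimum token requirement for caches.
long_document = "..."

# Create an explicit cache over the document (illustrative model name;
# explicit caching requires a cache-capable model version).
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    contents=[long_document],
    ttl=datetime.timedelta(minutes=5),
)
model = genai.GenerativeModel.from_cached_content(cached_content=cache)

response = model.count_tokens("Summarize the document.")
print(response.total_tokens)                # total effective prompt size
print(response.cached_content_token_count)  # tokens in the cached content

# Per the field descriptions, the cached tokens are included in
# total_tokens, so this relation should hold:
assert response.total_tokens >= response.cached_content_token_count
```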