google.generativeai.protos.CountMessageTokensResponse

A response from CountMessageTokens.

It returns the model's token_count for the prompt.

token_count: int

The number of tokens that the model tokenizes the prompt into. Always non-negative.
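A minimal sketch of reading the field. In practice this message is returned by the service's CountMessageTokens call rather than built by hand; the example constructs one directly only to illustrate field access.

```python
from google.generativeai import protos

# Normally this message comes back from the CountMessageTokens call;
# constructing it directly here just to show the field.
response = protos.CountMessageTokensResponse(token_count=42)

# token_count is the number of tokens the model tokenizes the prompt
# into, and is always non-negative.
print(response.token_count)  # 42
```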