google.ai.generativelanguage.CountTokensRequest

Counts the number of tokens in the prompt sent to a model.

Models tokenize text differently, so the same input may produce a different token_count from model to model.

model str

Required. The model's resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}
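
A minimal sketch of discovering a valid value for this field, assuming credentials are already configured in the environment and using the ModelServiceClient from this package:

```python
from google.ai import generativelanguage as glm

# List the available models and print their resource names.
# Any printed name (for example "models/gemini-pro") can be used
# as the `model` field of a CountTokensRequest.
model_client = glm.ModelServiceClient()
for model in model_client.list_models():
    print(model.name)
```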

contents MutableSequence[google.ai.generativelanguage.Content]

Optional. The input given to the model as a prompt.
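
A sketch of counting tokens for a plain prompt via the contents field, assuming a GenerativeServiceClient with credentials configured in the environment; the model name below is illustrative:

```python
from google.ai import generativelanguage as glm

client = glm.GenerativeServiceClient()  # assumes credentials in the environment

request = glm.CountTokensRequest(
    model="models/gemini-pro",  # illustrative model name
    contents=[
        glm.Content(parts=[glm.Part(text="Write a short poem about the sea.")]),
    ],
)
response = client.count_tokens(request=request)
print(response.total_tokens)
```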

generate_content_request google.ai.generativelanguage.GenerateContentRequest

Optional. The overall input given to the model. CountTokens counts everything in this request, including the prompt and any function-calling declarations.
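
A sketch of counting tokens for a full generation request, so the count reflects tool declarations as well as the prompt. The model name and the get_weather function declaration below are hypothetical, and credentials are assumed to be configured in the environment:

```python
from google.ai import generativelanguage as glm

client = glm.GenerativeServiceClient()  # assumes credentials in the environment

gen_request = glm.GenerateContentRequest(
    model="models/gemini-pro",  # illustrative model name
    contents=[glm.Content(parts=[glm.Part(text="What's the weather in Paris?")])],
    tools=[
        glm.Tool(
            function_declarations=[
                glm.FunctionDeclaration(
                    name="get_weather",  # hypothetical function declaration
                    description="Look up the current weather for a city.",
                ),
            ],
        ),
    ],
)

request = glm.CountTokensRequest(
    model="models/gemini-pro",
    generate_content_request=gen_request,
)
response = client.count_tokens(request=request)
print(response.total_tokens)  # total covers the prompt and the tool declarations
```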