Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so the same prompt can yield a different token_count for each model.

model str

Required. The model's resource name. This serves as an ID for the Model to use.

This name should match a model name returned by the ListModels method.

Format: models/{model}

contents MutableSequence[Content]

Optional. The input given to the model as a prompt. This field is ignored when generate_content_request is set.
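As a sketch, a CountTokens request with only `model` and `contents` set might be assembled like this. The field names follow the REST-style request shape, and the model name `models/gemini-pro` is an illustrative placeholder, not a guaranteed model ID:

```python
import json

# Hypothetical sketch of a CountTokens request body using the
# model + contents fields. "models/gemini-pro" is a placeholder;
# a real model name should come from the ListModels method.
request = {
    "model": "models/gemini-pro",
    "contents": [
        {"parts": [{"text": "Hello, how are you?"}]},
    ],
}

# Serialize to JSON as it would be sent over the wire.
body = json.dumps(request)
```

Note that if `generate_content_request` were also set on this request, the `contents` field here would be ignored.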


generate_content_request GenerateContentRequest

Optional. The overall input given to the model. CountTokens will count the prompt, function calling, etc.
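When the full GenerateContent input should be counted, the request can wrap it instead of setting `contents` directly. A minimal sketch, again using REST-style field names with placeholder model and tool values:

```python
# Hedged sketch: counting tokens for an entire GenerateContent input.
# When generate_content_request is set, the top-level contents field is
# ignored, and the count covers the prompt, function declarations, etc.
# Model name and function declaration below are illustrative placeholders.
count_request = {
    "model": "models/gemini-pro",
    "generate_content_request": {
        "model": "models/gemini-pro",
        "contents": [
            {"parts": [{"text": "What is the weather today?"}]},
        ],
        "tools": [
            {"function_declarations": [{"name": "get_weather"}]},
        ],
    },
}
```

The nested request carries everything that would be sent to GenerateContent, so the returned token count reflects the complete input rather than the text prompt alone.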