For a detailed guide on counting tokens using the Gemini API, including how images, audio and video are counted, see the Token counting guide and accompanying Cookbook recipe.
Method: models.countTokens
- Endpoint
- Path parameters
- Request body
- Response body
- Authorization scopes
- Example request
- GenerateContentRequest
Runs a model's tokenizer on input Content and returns the token count. Refer to the tokens guide to learn more about tokens.
Endpoint
post https://generativelanguage.googleapis.com/v1beta/{model=models/*}:countTokens
Path parameters
model
string
Required. The model's resource name. This serves as an ID for the Model to use.
This name should match a model name returned by the models.list method.
Format: models/{model}
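As a sketch, the full endpoint URL can be assembled from the model resource name. This uses only the endpoint and format shown above; the model ID is illustrative.

```python
# Assemble the countTokens endpoint URL from a model resource name.
# The model ID is illustrative; use any name returned by models.list.
BASE = "https://generativelanguage.googleapis.com/v1beta"

def count_tokens_url(model: str) -> str:
    # `model` must be a full resource name, e.g. "models/gemini-1.5-flash".
    if not model.startswith("models/"):
        raise ValueError("model must take the form models/{model}")
    return f"{BASE}/{model}:countTokens"

print(count_tokens_url("models/gemini-1.5-flash"))
```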
Request body
The request body contains data with the following structure:
contents[]
object (Content)
Optional. The input given to the model as a prompt. This field is ignored when generateContentRequest is set.
generateContentRequest
object (GenerateContentRequest)
Optional. The overall input given to the Model. This includes the prompt as well as other model steering information, like system instructions and/or function declarations for function calling. Models/Contents and generateContentRequests are mutually exclusive. You can either send Model + Contents or a generateContentRequest, but never both.
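A minimal sketch of the two mutually exclusive request shapes, built as plain JSON bodies; the prompt text and model ID are illustrative.

```python
# Form 1: send raw contents only.
with_contents = {
    "contents": [{"parts": [{"text": "The quick brown fox."}]}]
}

# Form 2: send a full generateContentRequest instead; note that the
# model name is repeated inside it, and system instructions, tools,
# etc. are then included in the count.
with_request = {
    "generateContentRequest": {
        "model": "models/gemini-1.5-flash",
        "contents": [{"parts": [{"text": "The quick brown fox."}]}],
    }
}

# A single request must use one form or the other, never both.
assert not set(with_contents) & set(with_request)
```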
Example request
Text
Python
Node.js
Go
Shell
Kotlin
Swift
Dart
Java
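In place of the per-language tabs from the original page, here is a standard-library Python sketch of the plain-text case. The model ID and prompt are illustrative, and an actual call requires a valid API key.

```python
import json
import os
import urllib.request

# Sketch of a REST call to models.countTokens for plain text.
URL = ("https://generativelanguage.googleapis.com/v1beta/"
       "models/gemini-1.5-flash:countTokens")

payload = {
    "contents": [{"parts": [{"text": "The quick brown fox jumps over the lazy dog."}]}]
}

def count_tokens(api_key: str) -> dict:
    # POST the JSON body; the response is a small JSON object
    # containing totalTokens (and possibly cachedContentTokenCount).
    req = urllib.request.Request(
        f"{URL}?key={api_key}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Only performs the network call when a key is actually configured.
if os.environ.get("GEMINI_API_KEY"):
    print(count_tokens(os.environ["GEMINI_API_KEY"]))
```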
Chat
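For chat, the same endpoint counts tokens across the whole role-tagged history sent in `contents`; a sketch with illustrative messages:

```python
# Token counts for a chat cover every turn in the history.
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "Hi, my name is Bob."}]},
        {"role": "model", "parts": [{"text": "Hi Bob! How can I help?"}]},
        {"role": "user", "parts": [{"text": "What is my name?"}]},
    ]
}

roles = [c["role"] for c in payload["contents"]]
print(roles)  # → ['user', 'model', 'user']
```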
Inline media
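Inline media is sent base64-encoded inside a part. The bytes below are just a placeholder PNG signature, not a real image, so this only illustrates the request shape:

```python
import base64

# Placeholder bytes (the 8-byte PNG signature); a real call needs actual image data.
png_bytes = b"\x89PNG\r\n\x1a\n"

payload = {
    "contents": [{
        "parts": [
            {"text": "Describe this image."},
            {"inlineData": {
                "mimeType": "image/png",
                "data": base64.b64encode(png_bytes).decode("ascii"),
            }},
        ]
    }]
}
```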
Video
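Video is typically uploaded through the File API first and then referenced by URI rather than inlined. The file URI below is a hypothetical placeholder:

```python
payload = {
    "contents": [{
        "parts": [
            {"text": "Summarize this video."},
            {"fileData": {
                "mimeType": "video/mp4",
                # Hypothetical placeholder; a real URI is returned by the File API upload.
                "fileUri": "https://generativelanguage.googleapis.com/v1beta/files/your-file-id",
            }},
        ]
    }]
}
```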
Cache
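Counting against cached content goes through generateContentRequest, since cachedContent is a field of that request; the response's cachedContentTokenCount then reports the cached portion. The cache name below is a placeholder in the documented cachedContents/{cachedContent} format:

```python
payload = {
    "generateContentRequest": {
        "model": "models/gemini-1.5-flash",
        "contents": [{"parts": [{"text": "What does the cached transcript say?"}]}],
        # Placeholder cache name; create one via the caching API first.
        "cachedContent": "cachedContents/your-cache-id",
    }
}
```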
System Instruction
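System instructions also count toward the total, but only when sent through generateContentRequest; a sketch with illustrative text:

```python
payload = {
    "generateContentRequest": {
        "model": "models/gemini-1.5-flash",
        # Counted along with the contents below.
        "systemInstruction": {"parts": [{"text": "You are a cat. Your name is Neko."}]},
        "contents": [{"parts": [{"text": "Hello."}]}],
    }
}
```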
Tools
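Function declarations likewise add to the count when included in a generateContentRequest. The function below is a hypothetical example for illustration:

```python
payload = {
    "generateContentRequest": {
        "model": "models/gemini-1.5-flash",
        "contents": [{"parts": [{"text": "What is the weather in Boston?"}]}],
        "tools": [{
            "functionDeclarations": [{
                "name": "get_weather",  # hypothetical function, for illustration only
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "OBJECT",
                    "properties": {"city": {"type": "STRING"}},
                    "required": ["city"],
                },
            }]
        }],
    }
}
```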
Response body
A response from models.countTokens.
It returns the model's tokenCount for the prompt.
If successful, the response body contains data with the following structure:
totalTokens
integer
The number of tokens that the Model tokenizes the prompt into. Always non-negative.
cachedContentTokenCount
integer
Number of tokens in the cached part of the prompt (the cached content).
JSON representation:
{
  "totalTokens": integer,
  "cachedContentTokenCount": integer
}
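Parsing the response is direct, since it carries only these two fields. The numeric values below are illustrative, not real API output:

```python
import json

# Illustrative response body, as it would arrive over the wire.
raw = '{"totalTokens": 31, "cachedContentTokenCount": 25}'
body = json.loads(raw)

total = body["totalTokens"]
cached = body.get("cachedContentTokenCount", 0)  # absent when nothing is cached
print(total, cached)  # → 31 25
```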
GenerateContentRequest
Request to generate a completion from the model.
model
string
Required. The name of the Model to use for generating the completion.
Format: name=models/{model}
tools[]
object (Tool)
Optional. A list of Tools the Model may use to generate the next response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the Model. Supported Tools are Function and codeExecution. Refer to the Function calling and Code execution guides to learn more.
toolConfig
object (ToolConfig)
Optional. Tool configuration for any Tool specified in the request. Refer to the Function calling guide for a usage example.
safetySettings[]
object (SafetySetting)
Optional. A list of unique SafetySetting instances for blocking unsafe content.
This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. The harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, and HARM_CATEGORY_HARASSMENT are supported. Refer to the guide for detailed information on available safety settings. Also refer to the Safety guidance to learn how to incorporate safety considerations in your AI applications.
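The uniqueness constraint above can be sketched as a list with one entry per category; the threshold strings are HarmBlockThreshold enum values, and the particular pairings here are illustrative:

```python
# One entry per harm category; unlisted categories keep their defaults.
safety_settings = [
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
]

categories = [s["category"] for s in safety_settings]
assert len(categories) == len(set(categories)), "at most one setting per category"
```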
systemInstruction
object (Content)
Optional. Developer set system instruction(s). Currently, text only.
generationConfig
object (GenerationConfig)
Optional. Configuration options for model generation and outputs.
cachedContent
string
Optional. The name of the content cached to use as context to serve the prediction. Format: cachedContents/{cachedContent}
JSON representation:
{ "model": string, "contents": [ { object (