Runs a model's tokenizer on input content and returns the token count.
HTTP request
POST https://generativelanguage.googleapis.com/v1beta/{model=models/*}:countTokens
Path parameters
| Parameters | |
| --- | --- |
| `model` | `string` Required. The model's resource name. This serves as an ID for the Model to use. This name should match a model name returned by the `models.list` method. Format: `models/{model}` |
Request body
The request body contains data with the following structure:
JSON representation

```json
{
  "contents": [
    {
      object (Content)
    }
  ],
  "generateContentRequest": {
    object (GenerateContentRequest)
  }
}
```
| Fields | |
| --- | --- |
| `contents[]` | `object (Content)` Optional. The input given to the model as a prompt. This field is ignored when `generateContentRequest` is set. |
| `generateContentRequest` | `object (GenerateContentRequest)` Optional. The overall input given to the model. `models.countTokens` will count prompt, function calling, etc. |
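As a sketch of how the pieces above fit together, the following builds a minimal `countTokens` request body and URL. The model name is an illustrative assumption, and the `parts`/`text` shape inside `contents` assumes the usual `Content` structure; neither is defined in this section.

```python
import json

# Hypothetical model name for illustration; use any name returned by
# the models.list method (Format: models/{model}).
MODEL = "models/gemini-pro"
BASE = "https://generativelanguage.googleapis.com/v1beta"

# Request body: `contents` holds the prompt to tokenize. It would be
# ignored if `generateContentRequest` were also set.
body = {
    "contents": [
        {"parts": [{"text": "How many tokens is this sentence?"}]}
    ]
}

# The {model=models/*} path parameter is substituted directly into the URL.
url = f"{BASE}/{MODEL}:countTokens"
payload = json.dumps(body)
print(url)
```

Sending `payload` as the POST body to `url` (with appropriate authorization, see below) returns the token count.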
Response body
A response from `models.countTokens`. It returns the model's `tokenCount` for the `prompt`.
If successful, the response body contains data with the following structure:
JSON representation

```json
{
  "totalTokens": integer
}
```
| Fields | |
| --- | --- |
| `totalTokens` | `integer` The number of tokens that the `Model` tokenizes the `prompt` into. Always non-negative. When `cachedContent` is set, this is still the total effective prompt size, i.e. it includes the number of tokens in the cached content. |
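Handling the response is a matter of reading the single field; the raw JSON string here is a made-up example matching the structure documented above.

```python
import json

# Made-up example response body matching the documented structure.
raw = '{"totalTokens": 42}'

resp = json.loads(raw)
total = resp["totalTokens"]

# totalTokens is always non-negative, and when cachedContent is set it
# already includes the cached tokens, so no extra arithmetic is needed.
assert total >= 0
print(total)
```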
Authorization scopes
Requires one of the following OAuth scopes:
https://www.googleapis.com/auth/generative-language
https://www.googleapis.com/auth/generative-language.tuning
https://www.googleapis.com/auth/generative-language.tuning.readonly
https://www.googleapis.com/auth/generative-language.retriever
https://www.googleapis.com/auth/generative-language.retriever.readonly
For more information, see the Authentication Overview.
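A request authorized under one of these scopes carries a standard OAuth 2.0 bearer token; the header shape is sketched below with a placeholder token value.

```python
# Placeholder access token; in practice, obtain one through an OAuth 2.0
# flow that grants one of the scopes listed above.
access_token = "ya29.EXAMPLE_TOKEN"

# Standard bearer-token headers for an authorized JSON API call.
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json",
}
print(headers["Authorization"])
```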
GenerateContentRequest
Request to generate a completion from the model.
JSON representation

```json
{
  "model": string,
  "contents": [
    {
      object (Content)
    }
  ],
  "tools": [
    {
      object (Tool)
    }
  ],
  "toolConfig": {
    object (ToolConfig)
  },
  "safetySettings": [
    {
      object (SafetySetting)
    }
  ],
  "systemInstruction": {
    object (Content)
  },
  "generationConfig": {
    object (GenerationConfig)
  },
  "cachedContent": string
}
```
| Fields | |
| --- | --- |
| `model` | `string` Required. The name of the `Model` to use for generating the completion. Format: `models/{model}` |
| `contents[]` | `object (Content)` Required. The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history and the latest request. |
| `tools[]` | `object (Tool)` Optional. A list of `Tools` the `Model` may use to generate the next response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the `Model`. |
| `toolConfig` | `object (ToolConfig)` Optional. Tool configuration for any `Tool` specified in the request. |
| `safetySettings[]` | `object (SafetySetting)` Optional. A list of unique `SafetySetting` instances for blocking unsafe content. This will be enforced on the `GenerateContentRequest.contents` and `GenerateContentResponse.candidates`. |
| `systemInstruction` | `object (Content)` Optional. Developer set system instruction. Currently, text only. |
| `generationConfig` | `object (GenerationConfig)` Optional. Configuration options for model generation and outputs. |
| `cachedContent` | `string` Optional. The name of the cached content used as context to serve the prediction. Note: only used in explicit caching, where users can have control over caching (e.g. what content to cache) and enjoy guaranteed cost savings. Format: `cachedContents/{cachedContent}` |
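To count tokens for a full generation request (prompt plus system instruction, tools, etc.) rather than a bare prompt, wrap a `GenerateContentRequest` inside the `countTokens` body. This sketch assumes the usual `Content` shape (`role` plus `parts` with a `text` field) and an illustrative model name.

```python
import json

MODEL = "models/gemini-pro"  # illustrative model name

# countTokens body wrapping a full GenerateContentRequest; the nested
# request must repeat the required `model` field. A top-level `contents`
# field would be ignored when generateContentRequest is set, so it is
# omitted here.
body = {
    "generateContentRequest": {
        "model": MODEL,
        "contents": [
            {"role": "user", "parts": [{"text": "Summarize this article."}]}
        ],
        "systemInstruction": {
            "parts": [{"text": "Answer in one sentence."}]
        },
    }
}

payload = json.dumps(body)
print(len(payload) > 0)
```

The returned `totalTokens` then reflects everything the model would consume: the conversation contents, the system instruction, and any tool declarations included in the nested request.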