# Understand and count tokens

Last updated (UTC): 2025-08-22.

Gemini and other generative AI models process input and output at a granularity called a *token*.

About tokens
------------

Tokens can be single characters like `z` or whole words like `cat`. Long words are broken up into several tokens. The set of all tokens used by the model is called the vocabulary, and the process of splitting text into tokens is called *tokenization*.

For Gemini models, a token is equivalent to about 4 characters, and 100 tokens correspond to roughly 60-80 English words.

When billing is enabled, the [cost of a call to the Gemini API](/pricing) is determined in part by the number of input and output tokens, so knowing how to count tokens can be helpful.
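The 4-characters-per-token rule of thumb can be sketched as a quick estimator. Note this is only a rough heuristic, not the model's actual tokenizer; the `estimate_tokens` helper below is illustrative, and the commented-out `count_tokens` call is a sketch assuming the `google-genai` Python SDK with an illustrative model name.

```python
# Rough heuristic: for Gemini models, 1 token is about 4 characters of
# English text. This is an estimate only; use the API for exact counts.
def estimate_tokens(text: str) -> int:
    """Estimate the token count of `text` using the ~4 chars/token rule."""
    return max(1, len(text) // 4)

print(estimate_tokens("The quick brown fox jumps over the lazy dog."))  # → 11

# For an exact count, the Gemini API exposes a count_tokens method
# (sketch only; requires an API key, and the model name is illustrative):
#
# from google import genai
# client = genai.Client()
# response = client.models.count_tokens(
#     model="gemini-2.0-flash",
#     contents="The quick brown fox jumps over the lazy dog.",
# )
# print(response.total_tokens)
```

Because billing depends on input and output tokens, a cheap estimate like this is useful for pre-flight checks, while the API call gives the authoritative count.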