Troubleshooting guide

Use this guide to diagnose and resolve common issues that arise when you call the Gemini API. If you run into API key issues, ensure you have set up your API key correctly per the API key setup guide.

Check your API calls for model parameter errors

Ensure your model parameters are within the following values:

  • Candidate count: 1-8 (integer)
  • Temperature: 0.0-1.0
  • Max output tokens: use get_model (Python) to determine the maximum number of tokens for the model you are using.
  • TopP: 0.0-1.0
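As an illustrative sketch (the validation helper below is mine, not part of the SDK), you can check values client-side against the ranges above before sending a request. The per-model output-token maximum varies, so it is passed in as a parameter, as reported by get_model (Python):

```python
def validate_params(candidate_count, temperature, top_p,
                    max_output_tokens, model_limit):
    """Check request parameters against the documented ranges.

    model_limit is the per-model output-token maximum reported by
    get_model (Python); it differs between models.
    """
    errors = []
    if not (isinstance(candidate_count, int) and 1 <= candidate_count <= 8):
        errors.append("candidate_count must be an integer in 1-8")
    if not (0.0 <= temperature <= 1.0):
        errors.append("temperature must be in 0.0-1.0")
    if not (0.0 <= top_p <= 1.0):
        errors.append("top_p must be in 0.0-1.0")
    if max_output_tokens > model_limit:
        errors.append(f"max_output_tokens exceeds the model limit of {model_limit}")
    return errors

# A request within all ranges produces no errors:
print(validate_params(candidate_count=2, temperature=0.7,
                      top_p=0.95, max_output_tokens=512, model_limit=1024))
# → []
```

A non-empty return value tells you which parameter to fix before the API rejects the call.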

Check if you have the right model

Ensure you are using a supported model. Use list_models (Python) to get all models available for use.
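As a sketch of how you might work with that listing (the model names and method lists below are illustrative placeholders, not an authoritative catalog), each record from list_models (Python) carries the generation methods the model supports, which you can filter on:

```python
# Each entry mimics the shape of a model record returned by
# list_models (Python): a name plus its supported generation methods.
# The specific names here are placeholders, not real model IDs.
models = [
    {"name": "models/example-text",
     "supported_generation_methods": ["generateText"]},
    {"name": "models/example-chat",
     "supported_generation_methods": ["generateMessage"]},
]

def models_supporting(models, method):
    """Return the names of models that support a given generation method."""
    return [m["name"] for m in models
            if method in m["supported_generation_methods"]]

print(models_supporting(models, "generateText"))
# → ['models/example-text']
```

Filtering on the method you intend to call avoids sending requests to a model that does not support them.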

Safety issues

If a prompt was blocked because of a safety setting in your API call, review the prompt against the filters you set in that call.

If you see BlockedReason.OTHER, the query or response may violate the terms of service or be otherwise unsupported.
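A minimal sketch of how a caller might branch on a block reason (the reason values mirror those named in this section; the handling logic and function name are illustrative, not part of the SDK):

```python
def describe_block(block_reason):
    """Map a block reason string to a suggested next step for the caller."""
    if block_reason == "SAFETY":
        return "Review the prompt against the safety filters set in the API call."
    if block_reason == "OTHER":
        return ("The query or response may violate the terms of service "
                "or be otherwise unsupported.")
    return "Response was not blocked."

print(describe_block("SAFETY"))
```

Logging the mapped message alongside the original prompt makes blocked requests easier to triage later.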

Improve model output

For higher quality model outputs, explore writing more structured prompts. The introduction to prompt design page introduces some basic concepts, strategies, and best practices to get you started.

If you have hundreds of examples of good input/output pairs, you can also consider model tuning.

Understand token limits

Use the ModelService API to get additional metadata about the models, including input and output token limits.

To get the tokens used by your prompt, use countMessageTokens for chat models and countTextTokens for text models.
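As an illustrative sketch (the helper and numbers below are mine, not SDK values), you can combine a model's input token limit from the ModelService metadata with a prompt's measured token count, e.g. from countTextTokens, to see how much room remains:

```python
def token_budget(input_token_limit, prompt_tokens):
    """Return how many input tokens remain after the current prompt.

    input_token_limit comes from the model metadata; prompt_tokens is
    the measured count for your prompt. Raises if the prompt is too big.
    """
    remaining = input_token_limit - prompt_tokens
    if remaining < 0:
        raise ValueError("Prompt exceeds the model's input token limit")
    return remaining

print(token_budget(input_token_limit=1024, prompt_tokens=900))
# → 124
```

Checking the budget before a call lets you truncate or summarize the prompt instead of receiving an error from the API.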

Known issues

  • Mobile support for Google AI Studio: While you can open the website on mobile, it has not been optimized for small screens.
  • The API supports only English. Submitting prompts in other languages can produce unexpected or even blocked responses. See available languages for updates.

File a bug

File an issue on GitHub to ask questions, request features, or report bugs.