Responsible Generative AI Toolkit

This toolkit provides resources to apply best practices for responsible use of open models such as Gemma, including:

  • Guidance on setting safety policies, and on safety tuning, safety classifiers, and model evaluation.
  • The Learning Interpretability Tool (LIT) for investigating Gemma's behavior and addressing potential issues.
  • A methodology for building robust safety classifiers with minimal examples; the sketch after this list shows where such a classifier fits in an application.
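
As a rough illustration of the classifier bullet above, the sketch below gates a generated response with a text safety classifier before it reaches the user. This is not code from the toolkit: it assumes the Hugging Face transformers library, and the checkpoint name, the "unsafe" label, and the is_safe helper are hypothetical placeholders you would replace with your own classifier and policy.

```python
from transformers import pipeline

# Hypothetical checkpoint name; substitute a safety classifier you
# have trained or vetted for your own safety policy.
SAFETY_MODEL = "your-org/your-safety-classifier"

classifier = pipeline("text-classification", model=SAFETY_MODEL)

def is_safe(text: str, threshold: float = 0.5) -> bool:
    """Return True unless the classifier flags the text as unsafe.

    The "unsafe" label name and the 0.5 threshold are assumptions;
    match them to whatever classifier you actually deploy.
    """
    result = classifier(text, truncation=True)[0]
    if result["label"].lower() == "unsafe":
        return result["score"] < threshold
    return True

# Gate a generated response before it reaches the user.
response = "..."  # text produced by a Gemma model
if not is_safe(response):
    response = "Sorry, I can't help with that."
print(response)
```

In practice you would calibrate the threshold against your own safety policy and evaluation data rather than hard-coding 0.5.
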

This version of the toolkit focuses on English text-to-text models only.

When building with Gemma, take a holistic approach to responsibility and consider the possible challenges at both the application and model levels. This toolkit covers risks and mitigation techniques that address safety, privacy, fairness, and accountability.

[Figure: Functional diagram of responsible AI practices]

Check out the rest of this toolkit for more information and guidance.

Authors and contributors

This toolkit builds on research and tools from various teams across Google, including these authors and contributors:

Ludovic Peran, Kathy Meier-Hellstern, Lucas Dixon, Reena Jana, Oscar Wahltinez, Clément Crepy, Ryan Mullins, Ian Tenney, Ted Klimenko, Shree Pandya, Nithum Thain, Mackenzie Thomas, Hayden Schaffer, Bin Du, Seliem El-Sayed, Parker Barnes, Madeleine Elish, Grace Wu, Tris Warkentin, Marie Pellat