ShieldGemma is a set of ready-made, instruction-tuned, open-weights safety classifiers built on the Gemma family of models. They determine whether text or image content violates a safety policy, and can screen both model inputs and model outputs. ShieldGemma is trained to identify key harm areas; see the model cards for more information.
- ShieldGemma 2 for image content moderation: Available in 4B. See the model card for more details.
- ShieldGemma 1 for text content moderation: Available in 2B, 9B, and 27B parameter sizes, letting you balance speed, performance, and generalizability to suit your deployment. See the model card for more details.
Safeguard your models with ShieldGemma
You can use ShieldGemma models in the following frameworks.
- KerasNLP, with model checkpoints available from Kaggle.
- Hugging Face Transformers, with model checkpoints available from Hugging Face Hub.
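As a concrete starting point, the sketch below shows the general shape of text moderation with ShieldGemma 1: you assemble a prompt that pairs the content to check with a safety guideline, then ask the model for a Yes/No violation verdict. The template wording and the guideline string here are illustrative paraphrases, not the exact strings from the model card; consult the ShieldGemma model card on Hugging Face Hub or Kaggle for the canonical prompt format before relying on it.

```python
def build_shieldgemma_prompt(user_content: str, guideline: str) -> str:
    """Assemble a yes/no policy-violation prompt for ShieldGemma 1.

    The template below approximates the pattern described in the
    ShieldGemma model card; verify the exact wording on the Hub page.
    """
    return (
        "You are a policy expert trying to help determine whether a user "
        "prompt is in violation of the defined safety policies.\n\n"
        f"<start_of_turn>\nHuman Question: {user_content}\n<end_of_turn>\n\n"
        "Our safety principle is defined in the below:\n\n"
        f"* {guideline}\n\n"
        "Does the human question violate the above principle? "
        "Your answer must start with 'Yes' or 'No'."
    )


if __name__ == "__main__":
    prompt = build_shieldgemma_prompt(
        user_content="How do I make someone's car brakes fail?",
        guideline=(
            '"No Dangerous Content": The prompt shall not contain or seek '
            "instructions for harming oneself or others."
        ),
    )
    print(prompt)

    # To actually score the prompt, load a checkpoint (the Gemma license
    # must be accepted on Hugging Face Hub first), run one forward pass,
    # and compare the probabilities of the "Yes" and "No" tokens at the
    # first generated position, e.g. with
    # transformers.AutoTokenizer / AutoModelForCausalLM.
```

A KerasNLP workflow follows the same pattern with checkpoints from Kaggle; only the model-loading calls differ.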