ShieldGemma
ShieldGemma is a set of instruction-tuned models for evaluating the safety of text and images against a set of defined safety policies. You can use these models as part of a larger generative AI application to help detect and prevent safety policy violations. The ShieldGemma family of models is provided with open weights so you can fine-tune them for your specific use case.
ShieldGemma 2 is a 4B parameter model built to label images for safety.
ShieldGemma 1 is built on Gemma 2 in 2B, 9B, and 27B parameter sizes.
Content safety evaluation
Evaluate the safety of input prompts and output responses against a set of defined safety policies.
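As a rough illustration of this evaluation flow, the sketch below builds a yes/no safety-classification prompt for a guard model and converts the model's first-token logits into a violation probability. The template wording, the function names, and the use of "Yes"/"No" token logits are assumptions for illustration, not the official ShieldGemma API; in practice you would follow the prompt format published with the model checkpoint.

```python
import math


def build_safety_prompt(user_content: str, policy: str) -> str:
    """Assemble a yes/no safety-classification prompt (illustrative template,
    not the official ShieldGemma format)."""
    return (
        "You are a policy expert trying to help determine whether a user "
        "prompt violates the defined safety policies.\n\n"
        f"Human Question: {user_content}\n\n"
        f"Our safety principle is defined below:\n\n{policy}\n\n"
        "Does the human question violate the above principle? "
        "Your answer must start with 'Yes' or 'No'."
    )


def violation_score(yes_logit: float, no_logit: float) -> float:
    """Softmax over the 'Yes'/'No' first-token logits, giving the
    probability that the content violates the policy."""
    m = max(yes_logit, no_logit)  # subtract max for numerical stability
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)


# Hypothetical policy text and user input, for demonstration only.
policy = (
    '"No Harassment": The prompt shall not contain content that '
    "threatens or bullies individuals."
)
prompt = build_safety_prompt("How do I write a polite follow-up email?", policy)

# In a real pipeline, `prompt` would be tokenized and sent to a ShieldGemma
# checkpoint; here we show only the scoring step with made-up logits.
score = violation_score(yes_logit=-2.0, no_logit=3.0)
```

A score near 0 suggests the content is likely compliant with the policy, while a score near 1 suggests a likely violation; your application can then block, flag, or route the content accordingly.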
Tuneable, open models
ShieldGemma models are provided with open weights and can be fine-tuned for your specific use case.