Build transparency artifacts

Documentation is a key method of achieving transparency for developers, governments, policy actors, and end users of your product. This can entail releasing detailed technical reports or model, data, and system cards that make public essential information drawn from safety and other model evaluations. Transparency artifacts are more than communication vehicles: they also guide AI researchers, deployers, and downstream developers toward responsible use of the model, and they help end users understand details about the model behind the product they are using.

Some transparency guidelines to consider:

  • Be clear with users when they are engaging with an experimental generative AI technology and highlight the possibility of unexpected model behavior.
  • Offer thorough documentation on how the generative AI service or product works, written in understandable language. Consider publishing structured transparency artifacts such as model cards, which describe the intended use of your model and summarize the evaluations performed throughout model development.
  • Show people how they can offer feedback and how they stay in control, for example by:
    • Providing mechanisms that help users validate responses to fact-based questions
    • Offering thumbs-up and thumbs-down icons for user feedback
    • Linking to ways to report problems, and supporting rapid response to user feedback
    • Giving users control over storing or deleting their activity
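The feedback and control mechanisms above can be sketched as a minimal in-memory service. This is an illustrative assumption, not a standard API: the `FeedbackStore` class and its method names are invented here to show the shape of the design.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the user feedback and activity controls described
# above; the class and method names are hypothetical, not a standard API.
@dataclass
class FeedbackStore:
    ratings: list = field(default_factory=list)   # thumbs up/down events
    reports: list = field(default_factory=list)   # problem reports for rapid response
    activity: dict = field(default_factory=dict)  # per-user stored activity

    def rate(self, user_id: str, response_id: str, thumbs_up: bool) -> None:
        """Record a thumbs-up or thumbs-down on a model response."""
        self.ratings.append({"user": user_id, "response": response_id, "up": thumbs_up})

    def report_problem(self, user_id: str, description: str) -> None:
        """File a problem report so the team can respond quickly."""
        self.reports.append({"user": user_id, "description": description})

    def store_activity(self, user_id: str, event: str) -> None:
        """Store user activity (with user consent)."""
        self.activity.setdefault(user_id, []).append(event)

    def delete_activity(self, user_id: str) -> None:
        """Let the user delete their stored activity entirely."""
        self.activity.pop(user_id, None)
```

The key design point is that deletion is a first-class operation: users who can see and remove their own activity are demonstrably in control, which is the transparency goal the list above describes.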

Developer Resources

There is no single template for transparency artifacts across the industry, but existing model cards can serve as a starting point to create your own:
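As one possible starting point, a structured model card can be represented with a handful of fields. The field names below loosely follow common model card practice but are an illustrative assumption, not a required schema:

```python
from dataclasses import dataclass, asdict

# Minimal model card sketch; field names are illustrative, loosely following
# common model card practice rather than any mandated industry schema.
@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str            # what the model is for, and for whom
    out_of_scope_uses: list      # uses the model is not designed or evaluated for
    training_data_summary: str   # high-level description of the training data
    evaluation_summary: str      # safety and performance evaluations performed
    limitations: str             # known failure modes and caveats

    def to_dict(self) -> dict:
        """Export the card as a plain dict, ready to publish (e.g., as JSON)."""
        return asdict(self)
```

Whatever structure you adopt, the intended-use, evaluation, and limitations fields map directly to the transparency goals described earlier in this section.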
