[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["没有我需要的信息","missingTheInformationINeed","thumb-down"],["太复杂/步骤太多","tooComplicatedTooManySteps","thumb-down"],["内容需要更新","outOfDate","thumb-down"],["翻译问题","translationIssue","thumb-down"],["示例/代码问题","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2024-07-18。"],[],[],null,["# Responsible Generative AI Toolkit\n\n\u003cbr /\u003e\n\nThis toolkit provides resources and tools to apply best practices for\nresponsible use of open models, such as [Gemma](/gemma), including:\n\n- Guidance on setting safety policies, safety tuning, safety classifiers and model evaluation.\n- The [Learning Interpretability Tool](/responsible/docs/alignment#lit) for investigating and debugging Gemma's behavior in response to prompts.\n- The [LLM Comparator](/responsible/docs/evaluation#llm-comparator) for running and visualizing comparative evaluation results.\n- A methodology for [building robust safety classifiers](/responsible/docs/safeguards#agile-classifiers) with minimal examples.\n\nThis version of the toolkit focuses on English text-to-text models only. You\ncan provide feedback to make this toolkit more helpful through the feedback\nmechanism link at the bottom of the page.\n\nWhen building with Gemma, you should take a holistic approach to responsibility\nand consider all the possible challenges at the application and model levels.\nThis toolkit covers risk and mitigation techniques to address safety, privacy,\nfairness, and accountability.\n\nCheck out the rest of this toolkit for more information and guidance:\n\n- [Design a responsible approach](/responsible/docs/design)\n- [Align your models](/responsible/docs/alignment)\n- [Evaluate your models and system for safety](/responsible/docs/evaluation)\n- [Protect your system with safeguards](/responsible/docs/safeguards)"]]