Azure AI Content Safety
Enhance the safety of generative AI applications with advanced guardrails for responsible AI
Content Safety models have been specifically trained and tested in the following languages: English, German, Spanish, Japanese, French, Italian, Portuguese, and Chinese. The service can work in other languages as well, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
Custom categories currently work well in English only. You can use other languages with your own dataset, but the quality might vary.
Prompt shields enhance the security of generative AI systems by defending against prompt injection attacks:
Protected material detection for text identifies and blocks known text content, such as lyrics, articles, recipes, and selected web content, from appearing in AI-generated outputs.
Protected material detection for code detects and prevents the output of known code. It checks for matches against public source code in GitHub repositories. Additionally, the code referencing capability powered by GitHub Copilot enables developers to locate repositories for exploring and discovering relevant code.