Trace Id is missing
Skip to main content
Azure

Azure AI Content Safety

Enhance the safety of generative AI applications with advanced guardrails for responsible AI
Overview

Build robust guardrails for generative AI

  • Detect and block violence, hate, sexual, and self-harm content. Configure severity thresholds for your specific use case, and adhere to your responsible AI policies.
  • Create unique content filters tailored to your requirements using custom categories. Quickly train a new custom category by providing examples of content you need to block.
  • Safeguard your AI applications against prompt injection attacks and jailbreak attempts. Identify and mitigate both direct and indirect threats with prompt shields.
  • Identify and correct generative AI hallucinations and ensure outputs are reliable, accurate, and grounded in data with groundedness detection.
  • Pinpoint copyrighted content and provide sources for preexisting text and code with protected material detection.
Video

Develop AI apps with built-in safety

Detect and mitigate harmful content in user-generated and AI-generated inputs and outputs—including text, images, and mixed media—all with Azure AI Content Safety.
Use cases

Safeguard your AI applications

Security

Built-in security and compliance 

Microsoft has committed to investing $20 billion in cybersecurity over five years.
We employ more than 8,500 security and threat intelligence experts across 77 countries.
Azure has one of the largest compliance certification portfolios in the industry.
Pricing

Flexible pricing to meet your needs

Pay for only what you use—no upfront costs. Azure AI Content Safety pay-as-you-go pricing is based on:
Customer stories

See how customers are protecting their applications with Azure AI Content Safety

FAQ

Frequently asked questions

  • Content Safety models have been specifically trained and tested in the following languages: English, German, Spanish, Japanese, French, Italian, Portuguese, and Chinese. The service can work in other languages as well, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
    Custom categories currently work well in English only. You can use other languages with your own dataset, but the quality might vary.
  • Some Azure AI Content Safety features are only available in certain regions. See the features available in each region.
  • The system monitors across four harm categories: hate, sexual, violence, and self-harm.
  • Yes, you can adjust severity thresholds for each harm category filter.
  • Yes, you can use the Azure AI Content Safety custom categories API to create your own content filters. By providing examples, you can train the filter to detect and block undesired content specific to your defined custom categories.
  • Prompt shields enhance the security of generative AI systems by defending against prompt injection attacks:
     
    • Direct prompt attacks (jailbreaks): Users try to manipulate the AI system and bypass safety protocols by creating prompts that attempt to alter system rules or trick the model into executing restricted actions.
    • Indirect attacks: Third-party content, like documents or emails, contains hidden instructions to exploit the AI system, such as embedded commands an AI might unknowingly execute.
  • Groundedness detection identifies and corrects the ungrounded outputs of generative AI models, ensuring they’re based on provided source materials. This helps to prevent the generation of fabricated or false information. Using a custom language model, groundedness detection evaluates claims against source data and mitigates AI hallucinations.
  • Protected material detection for text identifies and blocks known text content, such as lyrics, articles, recipes, and selected web content, from appearing in AI-generated outputs.
    Protected material detection for code detects and prevents the output of known code. It checks for matches against public source code in GitHub repositories. Additionally, the code referencing capability powered by GitHub Copilot enables developers to locate repositories for exploring and discovering relevant code.
  • The content filtering system inside Azure OpenAI Service is powered by Azure AI Content Safety. It’s designed to detect and prevent the output of harmful content in both input prompts and output completions. It works alongside core models, including GPT and DALL-E.
A woman sitting at a table using a laptop.
Next steps

Choose the Azure account that’s right for you

Pay as you go or try Azure free for up to 30 days.
A woman with short curly hair smiling in a green shirt.
Azure Solutions

Azure cloud solutions

Solve your business problems with proven combinations of Azure cloud services, as well as sample architectures and documentation.
A man in a white shirt using a laptop.
Business Solution Hub

Find the right Microsoft Cloud solution

Browse the Microsoft Business Solutions Hub to find the products and solutions that can help your organization reach its goals.
AI-powered assistant