
Azure AI Content Safety

azure.microsoft.com
Summary

Azure AI Content Safety is a Microsoft Azure service that detects and filters harmful or inappropriate content in text and images, primarily to improve the safety and responsible use of generative AI applications. It can block content across four harm categories (hate, sexual, violence, and self-harm), supports custom content filters, and helps mitigate security threats such as prompt injection attacks.
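To make the category-based filtering concrete, here is a minimal sketch of a blocking decision over the per-category severity scores the service's text-analysis API returns (by default, severities 0, 2, 4, or 6 per category; higher means more severe). The category names and response shape mirror the documented service; the threshold values and the `is_blocked` helper are illustrative assumptions, not part of the SDK.

```python
# Illustrative thresholds: block when a category's severity reaches this level.
# These values are an assumption, not Azure defaults.
DEFAULT_THRESHOLDS = {
    "Hate": 2,
    "SelfHarm": 2,
    "Sexual": 2,
    "Violence": 2,
}

def is_blocked(categories_analysis, thresholds=DEFAULT_THRESHOLDS):
    """Return True if any category's severity meets or exceeds its threshold.

    `categories_analysis` is a list of {"category": str, "severity": int}
    dicts, the shape of the analyze-text response payload.
    """
    for item in categories_analysis:
        limit = thresholds.get(item["category"])
        if limit is not None and item["severity"] >= limit:
            return True
    return False

# Example: a response flagging moderate violence (severity 4).
analysis = [
    {"category": "Hate", "severity": 0},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 4},
]
print(is_blocked(analysis))  # True: Violence severity 4 >= threshold 2
```

In a real application, `categories_analysis` would come from a call to the service (for example, via the `azure-ai-contentsafety` Python SDK's text-analysis operation, which requires an endpoint and key); the thresholding step above is where an application decides whether to allow, flag, or block the content.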

Features
0/15

No common features found

Rationale

Azure AI Content Safety focuses on detecting and moderating harmful or inappropriate content in text and images for generative AI applications. It does not offer file organization, chat-based file interaction, semantic search over general files, automated sorting, or cloud storage integration for file management. Its purpose is content moderation, not file management.