azure-ai-contentsafety-py


Azure AI Content Safety SDK for Python

Detect harmful user-generated and AI-generated content in applications.

Installation

pip install azure-ai-contentsafety

Environment Variables

CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<your-api-key>

Authentication

API Key

import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

Entra ID

import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.identity import DefaultAzureCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=DefaultAzureCredential(),
)
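For async applications, the package follows the usual Azure SDK pattern of a parallel aio namespace. A minimal sketch, assuming azure.ai.contentsafety.aio is present in your installed version:

import os
import asyncio

from azure.ai.contentsafety.aio import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

async def main():
    # Async clients should be closed; "async with" handles that
    async with ContentSafetyClient(
        endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    ) as client:
        response = await client.analyze_text(AnalyzeTextOptions(text="Your text"))
        for result in response.categories_analysis:
            print(f"{result.category}: severity {result.severity}")

asyncio.run(main())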

Analyze Text

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

request = AnalyzeTextOptions(text="Your text content to analyze")
response = client.analyze_text(request)

# Check each category of interest
for category in [TextCategory.HATE, TextCategory.SELF_HARM, TextCategory.SEXUAL, TextCategory.VIOLENCE]:
    result = next((r for r in response.categories_analysis if r.category == category), None)
    if result:
        print(f"{category}: severity {result.severity}")
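In practice these severities usually feed an allow/block decision. A minimal sketch of that pattern; the helper name and the threshold of 2 are illustrative choices, not part of the SDK:

# Hypothetical helper: flag text if any category reaches the threshold
def is_flagged(client, text, reject_at=2):
    response = client.analyze_text(AnalyzeTextOptions(text=text))
    return any(
        r.severity is not None and r.severity >= reject_at
        for r in response.categories_analysis
    )

if is_flagged(client, "Your text content to analyze"):
    print("Rejected by content policy")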

Analyze Image

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

From file

with open("image.jpg", "rb") as f: image_data = base64.b64encode(f.read()).decode("utf-8")

request = AnalyzeImageOptions( image=ImageData(content=image_data) )

response = client.analyze_image(request)

for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")

Image from URL

from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://example.com/image.jpg")
)

response = client.analyze_image(request)
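Analysis calls can fail (unsupported image formats, size limits, throttling, bad credentials). Like other Azure SDK clients, failures surface as HttpResponseError from azure.core; a sketch of the usual handling:

from azure.core.exceptions import HttpResponseError

try:
    response = client.analyze_image(request)
except HttpResponseError as e:
    # e.error carries the service's error code and message when available
    if e.error:
        print(f"{e.error.code}: {e.error.message}")
    raise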

Text Blocklist Management

Create Blocklist

from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential

blocklist_client = BlocklistClient(endpoint, AzureKeyCredential(key))

blocklist = TextBlocklist(
    blocklist_name="my-blocklist",
    description="Custom terms to block"
)

result = blocklist_client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=blocklist
)

Add Block Items

from azure.ai.contentsafety.models import AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem

items = AddOrUpdateTextBlocklistItemsOptions(
    blocklist_items=[
        TextBlocklistItem(text="blocked-term-1"),
        TextBlocklistItem(text="blocked-term-2"),
    ]
)

result = blocklist_client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=items
)

Analyze with Blocklist

from azure.ai.contentsafety.models import AnalyzeTextOptions

request = AnalyzeTextOptions(
    text="Text containing blocked-term-1",
    blocklist_names=["my-blocklist"],
    halt_on_blocklist_hit=True
)

response = client.analyze_text(request)

if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocked: {match.blocklist_item_text}")
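BlocklistClient also covers inspection and cleanup. A sketch of listing, removing, and deleting, assuming the 1.x method names (worth verifying against your installed version):

from azure.ai.contentsafety.models import RemoveTextBlocklistItemsOptions

# Enumerate items currently in a blocklist
for item in blocklist_client.list_text_blocklist_items(blocklist_name="my-blocklist"):
    print(item.blocklist_item_id, item.text)

# Remove individual items by id
blocklist_client.remove_blocklist_items(
    blocklist_name="my-blocklist",
    options=RemoveTextBlocklistItemsOptions(blocklist_item_ids=["<item-id>"]),
)

# Delete the whole blocklist
blocklist_client.delete_text_blocklist(blocklist_name="my-blocklist")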

Severity Levels

Text analysis returns 4 severity levels (0, 2, 4, 6) by default. For 8 levels (0-7):

from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType

request = AnalyzeTextOptions(
    text="Your text",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
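The response shape is unchanged; each entry in categories_analysis simply reports severity on the full 0-7 scale rather than the trimmed 0/2/4/6 set.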

Harm Categories

| Category | Description |
| --- | --- |
| Hate | Attacks based on identity (race, religion, gender, etc.) |
| Sexual | Sexual content, relationships, anatomy |
| Violence | Physical harm, weapons, injury |
| SelfHarm | Self-injury, suicide, eating disorders |

Severity Scale

| Level | Text Range | Image Range | Meaning |
| --- | --- | --- | --- |
| 0 | Safe | Safe | No harmful content |
| 2 | Low | Low | Mild references |
| 4 | Medium | Medium | Moderate content |
| 6 | High | High | Severe content |

Client Types

| Client | Purpose |
| --- | --- |
| ContentSafetyClient | Analyze text and images |
| BlocklistClient | Manage custom blocklists |

Best Practices

  • Use blocklists for domain-specific terms

  • Set severity thresholds appropriate for your use case

  • Handle multiple categories — content can be harmful in multiple ways

  • Use halt_on_blocklist_hit for immediate rejection

  • Log analysis results for audit and improvement

  • Consider 8-severity mode for finer-grained control

  • Pre-moderate AI outputs before showing to users (see the sketch after this list)
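A minimal sketch combining the logging and pre-moderation practices, gating a model reply before it reaches the user; generate_reply and the threshold are illustrative placeholders, not part of the SDK:

import logging

from azure.ai.contentsafety.models import AnalyzeTextOptions

def moderated_reply(client, prompt, threshold=2):
    reply = generate_reply(prompt)  # hypothetical: your LLM call goes here
    analysis = client.analyze_text(AnalyzeTextOptions(text=reply))
    # Keep a record of every analysis for audit and threshold tuning
    logging.info("moderation result: %s",
                 {str(r.category): r.severity for r in analysis.categories_analysis})
    if any(r.severity is not None and r.severity >= threshold
           for r in analysis.categories_analysis):
        return "Sorry, this response was withheld by content policy."
    return reply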
