Automated Content Safety API (ACS)
Introduction
The Automated Content Safety (ACS) API, also called the Compliance API, uses AI-powered reviewers to automatically analyze content for compliance with safety, legal, and community standards. Each piece of content is reviewed by multiple AI agents working in parallel, providing robust and reliable moderation decisions.
The API supports multiple content types including text, images, videos, and conversations, making it suitable for diverse applications.
Try the Interactive Playground →
Authentication
All API requests require authentication using a Bearer token. Include your API key in the Authorization header of every request:
Authorization: Bearer YOUR_API_KEY
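As a minimal sketch, the headers above can be built once and reused for every request (the API key value is a placeholder):

```python
API_KEY = "YOUR_API_KEY"  # placeholder; use your real key

def auth_headers(api_key: str) -> dict:
    """Build the headers required by every ACS API request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = auth_headers(API_KEY)
```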
Content Types
The API provides specialized endpoints for different content types. Each endpoint is optimized for its specific use case:
| Endpoint | Content Type | Use Case |
|---|---|---|
| POST /api/v1/compliance/text | General Text | Articles, posts, descriptions |
| POST /api/v1/compliance/comment | Comments | User comments, reviews, feedback |
| POST /api/v1/compliance/chat | Chat Messages | Real-time chat, messaging |
| POST /api/v1/compliance/conversation | Conversations | Multi-turn dialogues, support tickets |
| POST /api/v1/compliance/search | Search Queries | Search terms, queries |
| POST /api/v1/compliance/image | Images | User uploads, profile pictures |
| POST /api/v1/compliance/video | Videos | Video content, livestreams |
Compliance Check
The core functionality of the API is to check content for compliance. All endpoints follow the same request/response structure with content-type-specific optimizations.
Request
Send a POST request to the appropriate endpoint with a JSON body containing your content and configuration parameters.
Required Parameters
| Parameter | Type | Description |
|---|---|---|
| content | string/array | The content to analyze. Format varies by content type. |
Optional Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| amount | integer | 10 | Number of AI reviewers to use (1-10). More reviewers = higher accuracy but increased cost. |
| context | string | null | Additional context for reviewers (e.g., platform name, user profile info). |
| decision_method | string | "average" | How to aggregate reviewer decisions: average, any, all, or score. |
| excluded_categories | array | [] | Array of rule IDs to exclude from evaluation (e.g., ["A1", "C3"]). |
| custom_prompt | string | null | Custom instructions to replace default compliance rules. |
| verbose | integer | 1 | Response detail level: 0 (basic), 1 (standard), 2 (detailed with individual reviewers). |
Example Request (Text)
{
"content": "This is the text content to analyze for compliance.",
"amount": 10,
"context": "User comment on product review website",
"decision_method": "average",
"verbose": 2
}
Example Request (Conversation)
{
"content": [
{
"role": "user",
"content": "Hello! Can you help me?"
},
{
"role": "assistant",
"content": "Of course! How can I assist you today?"
},
{
"role": "user",
"content": "I need information about your products."
}
],
"amount": 8,
"decision_method": "average",
"verbose": 2
}
Example Request (Image URL)
{
"content": "https://example.com/image.jpg",
"amount": 10,
"context": "User profile picture upload",
"verbose": 2
}
Example Request (Base64 Image)
{
"content": "data:image/jpeg;base64,/9j/4AAQSkZJRg...",
"amount": 10,
"verbose": 2
}
Response
A successful request returns a JSON object with the compliance decision and supporting details:
{
"decision": "safe",
"reason": "Content complies with all guidelines. No policy violations detected.",
"score": 0,
"decision_method": "average",
"flagged_categories": [],
"agent_decisions": [
{
"agent_name": "Reviewer 1",
"decision": "safe",
"reason": "No violations found in content.",
"flagged_categories": [],
"time": 1.234
}
],
"execution_time": 2.456
}
Response Fields
| Field | Type | Description |
|---|---|---|
| decision | string | Final decision: safe or unsafe |
| reason | string | Explanation for the decision (verbose ≥ 1) |
| score | integer | Number of reviewers who flagged content as unsafe (verbose ≥ 1) |
| decision_method | string | The decision method used (verbose ≥ 1) |
| flagged_categories | array | List of flagged compliance categories with scores (verbose ≥ 1) |
| agent_decisions | array | Individual reviewer decisions and analysis (verbose = 2) |
| execution_time | float | Total processing time in seconds |
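Since most fields are only present at higher verbose levels, it is safest to read them defensively. A minimal sketch of interpreting a response (the payload below reuses the sample from the docs; `is_safe` and `flagged_names` are illustrative helper names, not part of the API):

```python
# Sample response payload, as shown above.
response = {
    "decision": "safe",
    "reason": "Content complies with all guidelines.",
    "score": 0,
    "flagged_categories": [],
    "execution_time": 2.456,
}

def is_safe(result: dict) -> bool:
    """'decision' is always present regardless of verbose level."""
    return result["decision"] == "safe"

def flagged_names(result: dict) -> list:
    """flagged_categories is only included when verbose >= 1, so default to []."""
    return [c["name"] for c in result.get("flagged_categories", [])]
```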
Flagged Category Object
{
"id": "H1",
"name": "Hate Speech",
"score": 4.2,
"count": 8
}
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier for the compliance rule |
| name | string | Human-readable name of the rule |
| score | float | Average confidence score (0-5) from reviewers |
| count | integer | Number of reviewers who flagged this category |
Compliance Rules
The API evaluates content against a comprehensive set of compliance rules organized into categories. Each category represents a specific type of policy violation or risk.
All rules are enabled by default. You can disable rules that don't apply to your use case with the excluded_categories parameter, or completely replace the rules with your own using custom_prompt.
Decision Methods
The API supports multiple methods for aggregating decisions from multiple reviewers:
Average (Default)
Content is marked as UNSAFE if 50% or more reviewers flag it.
{
"decision_method": "average"
}
Best for: General content moderation with balanced sensitivity.
Any
Content is marked as UNSAFE if any single reviewer flags it.
{
"decision_method": "any"
}
Best for: High-risk scenarios requiring maximum sensitivity (e.g., financial services, healthcare).
All
Content is marked as UNSAFE only if all reviewers unanimously flag it.
{
"decision_method": "all"
}
Best for: Scenarios where false positives are costly (e.g., public forums with strong free speech values).
Score
Uses weighted scoring based on category severity and reviewer confidence. Provides the most nuanced decision.
{
"decision_method": "score"
}
Best for: Complex content requiring granular risk assessment.
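The first three methods can be illustrated with a small sketch of the aggregation logic (the score method uses server-side severity weighting and is omitted here; this is an approximation of the documented behavior, not the service's actual implementation):

```python
def aggregate(decisions, method="average"):
    """Illustrative aggregation of per-reviewer verdicts ('safe'/'unsafe')."""
    flags = sum(1 for d in decisions if d == "unsafe")
    total = len(decisions)
    if method == "average":
        # UNSAFE if 50% or more reviewers flag it.
        return "unsafe" if flags >= total / 2 else "safe"
    if method == "any":
        # UNSAFE if any single reviewer flags it.
        return "unsafe" if flags > 0 else "safe"
    if method == "all":
        # UNSAFE only on a unanimous flag.
        return "unsafe" if flags == total else "safe"
    raise ValueError(f"unsupported method: {method}")
```

For example, with 10 reviewers and 5 flags, average yields unsafe while all yields safe.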
Start with average and adjust based on your specific needs and observed false positive/negative rates.
Multimodal Content
The API supports analyzing images and videos alongside text content.
Images
Images can be provided as URLs or base64-encoded data:
{
"content": "https://example.com/user-upload.jpg",
"amount": 10,
"context": "User profile picture",
"verbose": 2
}
{
"content": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEAYABgAAD...",
"amount": 10,
"context": "User profile picture",
"verbose": 2
}
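To produce the base64 data URI format shown above from raw image bytes, a small helper like this works (the function name is illustrative):

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a data URI for the `content` field."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```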
Videos
Videos follow the same format as images and can be provided as URLs or base64-encoded data:
{
"content": "https://example.com/video.mp4",
"amount": 8,
"context": "User uploaded video content",
"verbose": 2
}
Customization
Excluding Categories
You can exclude specific compliance rules that don't apply to your use case:
{
"content": "Your content here",
"excluded_categories": ["A1", "P2", "S3"],
"amount": 10
}
Custom Prompts
For specialized moderation needs, you can provide custom instructions that replace the default compliance rules:
{
"content": "Your content here",
"custom_prompt": "Evaluate this content for violations of our company's specific guidelines: 1) No competitor mentions, 2) No pricing discussions, 3) Professional tone required.",
"amount": 10
}
When you provide custom_prompt, the default compliance rules are completely replaced. Ensure your custom instructions are comprehensive.
Context Enhancement
Providing context helps reviewers make more informed decisions:
{
"content": "Your content here",
"context": "This is a comment on a tech support forum. The user is discussing a software issue. Our platform allows technical discussions including code snippets and error messages.",
"amount": 10
}
Error Handling
When an error occurs, the API returns an appropriate HTTP status code and a JSON error object:
{
"error": true,
"message": "Invalid API key provided",
"code": "invalid_api_key"
}
Common Error Codes
| Status Code | Error Code | Description |
|---|---|---|
| 400 | invalid_request | Missing required parameters or invalid request format |
| 400 | invalid_content | Content format is invalid for the specified content type |
| 401 | invalid_api_key | Missing or invalid API key |
| 402 | insufficient_credits | Account balance too low to process request |
| 413 | content_too_large | Content exceeds maximum size limit |
| 422 | invalid_parameter | Parameter value out of acceptable range |
| 429 | rate_limit_exceeded | Too many requests, please slow down |
| 500 | internal_error | Internal server error, try again later |
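A minimal retry sketch for the transient cases in the table (429 and 5xx), with exponential backoff; `send` stands in for whatever function performs the actual HTTP call and returns a `(status, body)` pair:

```python
import time

def with_retries(send, max_attempts=3, base_delay=1.0):
    """Call send() -> (status, body); retry 429 and 5xx with exponential backoff."""
    for attempt in range(max_attempts):
        status, body = send()
        if status != 429 and status < 500:
            return status, body  # success or non-retryable client error
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body  # give up after max_attempts
```

Non-retryable errors such as 400 or 401 are returned immediately so the caller can surface them.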
Handling Failed Reviewers
In rare cases, individual reviewers may fail (timeout, API error, etc.). These are included in the response with decision: "failed" when verbose = 2:
{
"agent_decisions": [
{
"agent_name": "Reviewer 3",
"decision": "failed",
"reason": "Timeout after 30 seconds",
"time": 30.0
}
]
}
Failed reviewers are excluded from the final decision calculation. If too many reviewers fail, consider:
- Reducing the amount parameter
- Simplifying or shortening the content
- Retrying the request after a brief delay
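When processing verbose=2 responses client-side, you may want to filter out failed reviewers and check that enough verdicts remain to trust the result. A sketch (the helper name and 50% threshold are illustrative choices, not part of the API):

```python
def usable_decisions(agent_decisions, min_ratio=0.5):
    """Drop failed reviewers; flag whether enough verdicts remain to trust."""
    ok = [a for a in agent_decisions if a["decision"] != "failed"]
    enough = len(ok) >= min_ratio * len(agent_decisions)
    return ok, enough
```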
Code Examples
Here are examples of how to use the Compliance API in different programming languages.
Python

import requests
API_KEY = "your_api_key_here"
BASE_URL = "https://test-compliance-api.umbrosus.com/api/v1/compliance"
def check_text_compliance(content, amount=10):
"""Check text content for compliance"""
response = requests.post(
f"{BASE_URL}/text",
headers={
"Authorization": f"Bearer {API_KEY}",
"Content-Type": "application/json"
},
json={
"content": content,
"amount": amount,
"decision_method": "average",
"verbose": 2
}
)
return response.json()
def check_image_compliance(image_url, context=None):
"""Check image content for compliance"""
payload = {
"content": image_url,
"amount": 10,
"verbose": 2
}
if context:
payload["context"] = context
response = requests.post(
f"{BASE_URL}/image",
headers={
"Authorization": f"Bearer {API_KEY}",
"Content-Type": "application/json"
},
json=payload
)
return response.json()
def check_conversation_compliance(messages):
"""Check conversation for compliance"""
response = requests.post(
f"{BASE_URL}/conversation",
headers={
"Authorization": f"Bearer {API_KEY}",
"Content-Type": "application/json"
},
json={
"content": messages,
"amount": 8,
"decision_method": "average",
"verbose": 2
}
)
return response.json()
# Example usage
if __name__ == "__main__":
# Check text
result = check_text_compliance(
"This is a user comment that needs to be reviewed."
)
print(f"Decision: {result['decision']}")
print(f"Reason: {result['reason']}")
# Check image
image_result = check_image_compliance(
"https://example.com/user-upload.jpg",
context="Profile picture upload"
)
print(f"Image Decision: {image_result['decision']}")
# Check conversation
conversation = [
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hi! How can I help?"},
{"role": "user", "content": "I need some information."}
]
conv_result = check_conversation_compliance(conversation)
print(f"Conversation Decision: {conv_result['decision']}")
JavaScript

const API_KEY = 'your_api_key_here';
const BASE_URL = 'https://test-compliance-api.umbrosus.com/api/v1/compliance';
async function checkTextCompliance(content, amount = 10) {
const response = await fetch(`${BASE_URL}/text`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
content,
amount,
decision_method: 'average',
verbose: 2
})
});
return await response.json();
}
async function checkImageCompliance(imageUrl, context = null) {
const payload = {
content: imageUrl,
amount: 10,
verbose: 2
};
if (context) {
payload.context = context;
}
const response = await fetch(`${BASE_URL}/image`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify(payload)
});
return await response.json();
}
async function checkConversationCompliance(messages) {
const response = await fetch(`${BASE_URL}/conversation`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
content: messages,
amount: 8,
decision_method: 'average',
verbose: 2
})
});
return await response.json();
}
// Example usage
(async () => {
// Check text
const result = await checkTextCompliance(
'This is a user comment that needs to be reviewed.'
);
console.log('Decision:', result.decision);
console.log('Reason:', result.reason);
// Check image
const imageResult = await checkImageCompliance(
'https://example.com/user-upload.jpg',
'Profile picture upload'
);
console.log('Image Decision:', imageResult.decision);
// Check conversation
const conversation = [
{ role: 'user', content: 'Hello!' },
{ role: 'assistant', content: 'Hi! How can I help?' },
{ role: 'user', content: 'I need some information.' }
];
const convResult = await checkConversationCompliance(conversation);
console.log('Conversation Decision:', convResult.decision);
})();
cURL

# Check text content
curl -X POST "https://test-compliance-api.umbrosus.com/api/v1/compliance/text" -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"content": "This is a user comment that needs to be reviewed.",
"amount": 10,
"decision_method": "average",
"verbose": 2
}'
# Check image by URL
curl -X POST "https://test-compliance-api.umbrosus.com/api/v1/compliance/image" -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"content": "https://example.com/user-upload.jpg",
"amount": 10,
"context": "Profile picture upload",
"verbose": 2
}'
# Check conversation
curl -X POST "https://test-compliance-api.umbrosus.com/api/v1/compliance/conversation" -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"content": [
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hi! How can I help?"},
{"role": "user", "content": "I need some information."}
],
"amount": 8,
"decision_method": "average",
"verbose": 2
}'
# With custom prompt
curl -X POST "https://test-compliance-api.umbrosus.com/api/v1/compliance/text" -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"content": "Product review content here",
"custom_prompt": "Check if this review violates our guidelines: no competitor mentions, no pricing discussions, must be relevant to the product.",
"amount": 10,
"verbose": 2
}'
PHP

<?php
class ComplianceAPI {
private $apiKey;
private $baseUrl;
public function __construct($apiKey, $baseUrl) {
$this->apiKey = $apiKey;
$this->baseUrl = $baseUrl;
}
private function makeRequest($endpoint, $data) {
$ch = curl_init($this->baseUrl . $endpoint);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($data));
curl_setopt($ch, CURLOPT_HTTPHEADER, [
'Authorization: Bearer ' . $this->apiKey,
'Content-Type: application/json'
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpCode !== 200) {
throw new Exception("API returned status code: " . $httpCode);
}
return json_decode($response, true);
}
public function checkText($content, $amount = 10, $context = null) {
$data = [
'content' => $content,
'amount' => $amount,
'decision_method' => 'average',
'verbose' => 2
];
if ($context) {
$data['context'] = $context;
}
return $this->makeRequest('/text', $data);
}
public function checkImage($imageUrl, $context = null) {
$data = [
'content' => $imageUrl,
'amount' => 10,
'verbose' => 2
];
if ($context) {
$data['context'] = $context;
}
return $this->makeRequest('/image', $data);
}
public function checkConversation($messages, $amount = 8) {
return $this->makeRequest('/conversation', [
'content' => $messages,
'amount' => $amount,
'decision_method' => 'average',
'verbose' => 2
]);
}
}
// Example usage
$api = new ComplianceAPI(
'your_api_key_here',
'https://test-compliance-api.umbrosus.com/api/v1/compliance'
);
try {
// Check text
$result = $api->checkText(
'This is a user comment that needs to be reviewed.',
10,
'User comment on product page'
);
echo "Decision: " . $result['decision'] . "\n";
echo "Reason: " . $result['reason'] . "\n";
// Check image
$imageResult = $api->checkImage(
'https://example.com/user-upload.jpg',
'Profile picture upload'
);
echo "Image Decision: " . $imageResult['decision'] . "\n";
// Check conversation
$conversation = [
['role' => 'user', 'content' => 'Hello!'],
['role' => 'assistant', 'content' => 'Hi! How can I help?'],
['role' => 'user', 'content' => 'I need some information.']
];
$convResult = $api->checkConversation($conversation);
echo "Conversation Decision: " . $convResult['decision'] . "\n";
} catch (Exception $e) {
echo "Error: " . $e->getMessage() . "\n";
}
?>
Best Practices
- Start with verbose=1: Only use verbose=2 when you need detailed reviewer breakdowns for debugging or analytics.
- Use appropriate endpoints: Choose the content-type-specific endpoint for better contextual analysis.
- Provide context: Adding context about the content source improves accuracy significantly.
- Handle errors gracefully: Implement retry logic for transient errors (5xx) and handle rate limits (429).
- Optimize reviewer count: For high-volume applications, consider using 5-7 reviewers instead of 10 to reduce costs.
- Cache results: For static content, cache compliance results to avoid redundant checks.
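The caching practice above can be sketched with a simple content-hash memo; `check_fn` stands in for whatever function actually calls the API:

```python
import hashlib

_compliance_cache = {}

def cached_check(content: str, check_fn):
    """Return a cached compliance result for identical static content."""
    key = hashlib.sha256(content.encode("utf-8")).hexdigest()
    if key not in _compliance_cache:
        _compliance_cache[key] = check_fn(content)  # only hit the API on a miss
    return _compliance_cache[key]
```

In production you would likely use a shared store (e.g. Redis) with a TTL instead of an in-process dict, so results expire as your rules evolve.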