Automated Content Safety API (ACS)

Introduction

The Automated Content Safety (ACS) API uses AI-powered reviewers to automatically analyze content for compliance with safety, legal, and community standards. Each piece of content is reviewed by multiple AI agents working in parallel, producing robust, reliable moderation decisions.

The API supports multiple content types including text, images, videos, and conversations, making it suitable for diverse applications.


Authentication

All API requests require authentication using a Bearer token. Include your API key in the Authorization header of every request:

Authorization: Bearer YOUR_API_KEY
Note: Each compliance check consumes credits based on the number of reviewers used and the content type analyzed.
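As a minimal illustration, the required headers can be assembled in Python like so (the helper name is ours, not part of an official SDK):

```python
def auth_headers(api_key: str) -> dict:
    """Build the headers every ACS request needs.

    The Bearer scheme is documented above; this helper is just a sketch.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```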

Content Types

The API provides specialized endpoints for different content types. Each endpoint is optimized for its specific use case:

| Endpoint | Content Type | Use Case |
| --- | --- | --- |
| POST /api/v1/compliance/text | General Text | Articles, posts, descriptions |
| POST /api/v1/compliance/comment | Comments | User comments, reviews, feedback |
| POST /api/v1/compliance/chat | Chat Messages | Real-time chat, messaging |
| POST /api/v1/compliance/conversation | Conversations | Multi-turn dialogues, support tickets |
| POST /api/v1/compliance/search | Search Queries | Search terms, queries |
| POST /api/v1/compliance/image | Images | User uploads, profile pictures |
| POST /api/v1/compliance/video | Videos | Video content, livestreams |
Note: While all endpoints use the same underlying compliance engine, choosing the appropriate endpoint ensures your content is evaluated with the most relevant context and rules.
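Endpoint selection can be made mechanical with a small lookup table. This helper is illustrative only (the mapping mirrors the table above; the function itself is not part of an official client):

```python
# Endpoint paths from the table above; the helper itself is a sketch.
ENDPOINTS = {
    "text": "/api/v1/compliance/text",
    "comment": "/api/v1/compliance/comment",
    "chat": "/api/v1/compliance/chat",
    "conversation": "/api/v1/compliance/conversation",
    "search": "/api/v1/compliance/search",
    "image": "/api/v1/compliance/image",
    "video": "/api/v1/compliance/video",
}

def endpoint_for(content_type: str) -> str:
    """Return the endpoint path for a content type; reject unknown types."""
    try:
        return ENDPOINTS[content_type]
    except KeyError:
        raise ValueError(f"Unsupported content type: {content_type}")
```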

Compliance Check

The core functionality of the API is to check content for compliance. All endpoints follow the same request/response structure with content-type-specific optimizations.

Request

Send a POST request to the appropriate endpoint with a JSON body containing your content and configuration parameters.

Required Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| content | string/array | The content to analyze. Format varies by content type. |

Optional Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| amount | integer | 10 | Number of AI reviewers to use (1-10). More reviewers means higher accuracy but increased cost. |
| context | string | null | Additional context for reviewers (e.g., platform name, user profile info). |
| decision_method | string | "average" | How to aggregate reviewer decisions: average, any, all, or score. |
| excluded_categories | array | [] | Array of rule IDs to exclude from evaluation (e.g., ["A1", "C3"]). |
| custom_prompt | string | null | Custom instructions that replace the default compliance rules. |
| verbose | integer | 1 | Response detail level: 0 (basic), 1 (standard), 2 (detailed, with individual reviewer decisions). |
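A request body can be assembled and validated against the documented parameters before sending. The following sketch enforces the documented 1-10 range for amount; the helper and its name are ours, not part of an official SDK:

```python
def build_payload(content, amount=10, **options):
    """Assemble an ACS request body from the documented parameters.

    Illustrative sketch: validates the documented 1-10 range for `amount`
    and rejects parameters not listed in the tables above.
    """
    if not 1 <= amount <= 10:
        raise ValueError("amount must be between 1 and 10")
    allowed = {"context", "decision_method", "excluded_categories",
               "custom_prompt", "verbose"}
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    return {"content": content, "amount": amount, **options}
```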

Example Request (Text)

{
  "content": "This is the text content to analyze for compliance.",
  "amount": 10,
  "context": "User comment on product review website",
  "decision_method": "average",
  "verbose": 2
}

Example Request (Conversation)

{
  "content": [
    {
      "role": "user",
      "content": "Hello! Can you help me?"
    },
    {
      "role": "assistant",
      "content": "Of course! How can I assist you today?"
    },
    {
      "role": "user",
      "content": "I need information about your products."
    }
  ],
  "amount": 8,
  "decision_method": "average",
  "verbose": 2
}

Example Request (Image URL)

{
  "content": "https://example.com/image.jpg",
  "amount": 10,
  "context": "User profile picture upload",
  "verbose": 2
}

Example Request (Base64 Image)

{
  "content": "data:image/jpeg;base64,/9j/4AAQSkZJRg...",
  "amount": 10,
  "verbose": 2
}

Response

A successful request returns a JSON object with the compliance decision and supporting details:

{
  "decision": "safe",
  "reason": "Content complies with all guidelines. No policy violations detected.",
  "score": 0,
  "decision_method": "average",
  "flagged_categories": [],
  "agent_decisions": [
    {
      "agent_name": "Reviewer 1",
      "decision": "safe",
      "reason": "No violations found in content.",
      "flagged_categories": [],
      "time": 1.234
    }
  ],
  "execution_time": 2.456
}

Response Fields

| Field | Type | Description |
| --- | --- | --- |
| decision | string | Final decision: safe or unsafe |
| reason | string | Explanation for the decision (verbose ≥ 1) |
| score | integer | Number of reviewers who flagged the content as unsafe (verbose ≥ 1) |
| decision_method | string | The decision method used (verbose ≥ 1) |
| flagged_categories | array | List of flagged compliance categories with scores (verbose ≥ 1) |
| agent_decisions | array | Individual reviewer decisions and analysis (verbose = 2) |
| execution_time | float | Total processing time in seconds |

Flagged Category Object

{
  "id": "H1",
  "name": "Hate Speech",
  "score": 4.2,
  "count": 8
}
| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier for the compliance rule |
| name | string | Human-readable name of the rule |
| score | float | Average confidence score (0-5) from reviewers |
| count | integer | Number of reviewers who flagged this category |
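A response with verbose ≥ 1 can be turned into human-readable flag summaries with a few lines of Python. This is an illustrative sketch over the documented response shape, not an official SDK helper:

```python
def summarize_flags(response: dict) -> list[str]:
    """Summarize flagged categories from a verbose >= 1 response.

    Relies only on the Flagged Category Object fields documented above.
    """
    return [
        f"{cat['name']} ({cat['id']}): score {cat['score']:.1f}, "
        f"flagged by {cat['count']} reviewer(s)"
        for cat in response.get("flagged_categories", [])
    ]

sample = {
    "decision": "unsafe",
    "flagged_categories": [
        {"id": "H1", "name": "Hate Speech", "score": 4.2, "count": 8}
    ],
}
print(summarize_flags(sample))
# ['Hate Speech (H1): score 4.2, flagged by 8 reviewer(s)']
```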

Compliance Rules

The API evaluates content against a comprehensive set of compliance rules organized into categories. Each category represents a specific type of policy violation or risk.

Customization: You can exclude specific categories using the excluded_categories parameter, or completely replace the rules with your own using custom_prompt.

Decision Methods

The API supports multiple methods for aggregating decisions from multiple reviewers:

Average (Default)

Content is marked as UNSAFE if 50% or more reviewers flag it.

{
  "decision_method": "average"
}

Best for: General content moderation with balanced sensitivity.

Any

Content is marked as UNSAFE if any single reviewer flags it.

{
  "decision_method": "any"
}

Best for: High-risk scenarios requiring maximum sensitivity (e.g., financial services, healthcare).

All

Content is marked as UNSAFE only if all reviewers unanimously flag it.

{
  "decision_method": "all"
}

Best for: Scenarios where false positives are costly (e.g., public forums with strong free speech values).

Score

Uses weighted scoring based on category severity and reviewer confidence. Provides the most nuanced decision.

{
  "decision_method": "score"
}

Best for: Complex content requiring granular risk assessment.

Recommendation: Start with average and adjust based on your specific needs and observed false positive/negative rates.
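The average, any, and all methods can be sketched client-side directly from the definitions above (the API performs this aggregation server-side; score uses a weighted model whose exact weighting is not documented, so it is not reproduced here):

```python
def aggregate(decisions: list[str], method: str = "average") -> str:
    """Client-side sketch of the documented decision methods.

    decisions: list of "safe"/"unsafe" votes, with "failed" reviewers
    already removed. Illustrative only; the API does this server-side.
    """
    flags = sum(1 for d in decisions if d == "unsafe")
    if method == "average":  # unsafe if 50% or more of reviewers flag it
        return "unsafe" if flags >= len(decisions) / 2 else "safe"
    if method == "any":      # unsafe if any single reviewer flags it
        return "unsafe" if flags > 0 else "safe"
    if method == "all":      # unsafe only on a unanimous flag
        return "unsafe" if flags == len(decisions) else "safe"
    raise ValueError(f"Unknown or unsupported method: {method}")

print(aggregate(["safe", "unsafe", "unsafe", "safe"], "average"))  # unsafe
print(aggregate(["safe", "safe", "unsafe"], "all"))                # safe
```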

Multimodal Content

The API supports analyzing images and videos alongside text content.

Images

Images can be provided as URLs or base64-encoded data:

{
  "content": "https://example.com/user-upload.jpg",
  "amount": 10,
  "context": "User profile picture",
  "verbose": 2
}
{
  "content": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEAYABgAAD...",
  "amount": 10,
  "context": "User profile picture",
  "verbose": 2
}
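A local file can be converted to the base64 data-URI form shown above with the standard library. A minimal sketch; the file path is a placeholder:

```python
import base64
import mimetypes

def to_data_uri(path: str) -> str:
    """Encode a local image file as a base64 data URI for the "content" field."""
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime or 'application/octet-stream'};base64,{encoded}"
```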

Videos

Videos follow the same format as images and can be provided as URLs or base64-encoded data:

{
  "content": "https://example.com/video.mp4",
  "amount": 8,
  "context": "User uploaded video content",
  "verbose": 2
}
Performance Note: Image and video analysis typically takes longer than text analysis and may consume more credits. Consider using fewer reviewers (amount: 5-7) for faster responses if real-time performance is critical.

Customization

Excluding Categories

You can exclude specific compliance rules that don't apply to your use case:

{
  "content": "Your content here",
  "excluded_categories": ["A1", "P2", "S3"],
  "amount": 10
}

Custom Prompts

For specialized moderation needs, you can provide custom instructions that replace the default compliance rules:

{
  "content": "Your content here",
  "custom_prompt": "Evaluate this content for violations of our company's specific guidelines: 1) No competitor mentions, 2) No pricing discussions, 3) Professional tone required.",
  "amount": 10
}
Note: When using custom_prompt, the default compliance rules are completely replaced. Ensure your custom instructions are comprehensive.

Context Enhancement

Providing context helps reviewers make more informed decisions:

{
  "content": "Your content here",
  "context": "This is a comment on a tech support forum. The user is discussing a software issue. Our platform allows technical discussions including code snippets and error messages.",
  "amount": 10
}

Error Handling

When an error occurs, the API returns an appropriate HTTP status code and a JSON error object:

{
  "error": true,
  "message": "Invalid API key provided",
  "code": "invalid_api_key"
}

Common Error Codes

| Status Code | Error Code | Description |
| --- | --- | --- |
| 400 | invalid_request | Missing required parameters or invalid request format |
| 400 | invalid_content | Content format is invalid for the specified content type |
| 401 | invalid_api_key | Missing or invalid API key |
| 402 | insufficient_credits | Account balance too low to process the request |
| 413 | content_too_large | Content exceeds the maximum size limit |
| 422 | invalid_parameter | Parameter value out of acceptable range |
| 429 | rate_limit_exceeded | Too many requests; slow down |
| 500 | internal_error | Internal server error; try again later |
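The transient codes in this table (429 and 5xx) are the ones worth retrying. The pattern can be sketched with exponential backoff; do_request here is any callable returning (status_code, body), e.g. a wrapper around requests.post. This is a sketch of the pattern, not an official client:

```python
import time

def with_retries(do_request, max_attempts=3, sleep=time.sleep):
    """Retry transient failures (HTTP 429 and 5xx) with exponential backoff.

    do_request() returns (status_code, body). Other 4xx codes are treated
    as permanent and raised immediately.
    """
    for attempt in range(max_attempts):
        status, body = do_request()
        if status == 200:
            return body
        if status == 429 or status >= 500:
            sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
            continue
        raise RuntimeError(f"Non-retryable error {status}: {body}")
    raise RuntimeError(f"Gave up after {max_attempts} attempts")
```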

Handling Failed Reviewers

In rare cases, individual reviewers may fail (timeout, API error, etc.). These are included in the response with decision: "failed" when verbose = 2:

{
  "agent_decisions": [
    {
      "agent_name": "Reviewer 3",
      "decision": "failed",
      "reason": "Timeout after 30 seconds",
      "time": 30.0
    }
  ]
}

Failed reviewers are excluded from the final decision calculation. If too many reviewers fail, consider:

  • Reducing the amount parameter
  • Simplifying or shortening the content
  • Retrying the request after a brief delay
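When inspecting a verbose = 2 response yourself, failed reviewers can be filtered out the same way the API excludes them from the final decision. An illustrative sketch over the documented agent_decisions shape:

```python
def usable_votes(response: dict) -> list[str]:
    """Extract non-failed reviewer decisions from a verbose=2 response.

    Mirrors the documented behavior that failed reviewers are excluded
    from the final decision calculation.
    """
    return [
        agent["decision"]
        for agent in response.get("agent_decisions", [])
        if agent["decision"] != "failed"
    ]

resp = {"agent_decisions": [
    {"agent_name": "Reviewer 1", "decision": "safe"},
    {"agent_name": "Reviewer 2", "decision": "unsafe"},
    {"agent_name": "Reviewer 3", "decision": "failed"},
]}
print(usable_votes(resp))  # ['safe', 'unsafe']
```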

Code Examples

Here are examples of how to use the Compliance API in different programming languages.

Python

import requests

API_KEY = "your_api_key_here"
BASE_URL = "https://test-compliance-api.umbrosus.com/api/v1/compliance"

def check_text_compliance(content, amount=10):
    """Check text content for compliance"""
    response = requests.post(
        f"{BASE_URL}/text",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        json={
            "content": content,
            "amount": amount,
            "decision_method": "average",
            "verbose": 2
        }
    )
    return response.json()

def check_image_compliance(image_url, context=None):
    """Check image content for compliance"""
    payload = {
        "content": image_url,
        "amount": 10,
        "verbose": 2
    }
    
    if context:
        payload["context"] = context
    
    response = requests.post(
        f"{BASE_URL}/image",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        json=payload
    )
    return response.json()

def check_conversation_compliance(messages):
    """Check conversation for compliance"""
    response = requests.post(
        f"{BASE_URL}/conversation",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        json={
            "content": messages,
            "amount": 8,
            "decision_method": "average",
            "verbose": 2
        }
    )
    return response.json()

# Example usage
if __name__ == "__main__":
    # Check text
    result = check_text_compliance(
        "This is a user comment that needs to be reviewed."
    )
    print(f"Decision: {result['decision']}")
    print(f"Reason: {result['reason']}")
    
    # Check image
    image_result = check_image_compliance(
        "https://example.com/user-upload.jpg",
        context="Profile picture upload"
    )
    print(f"Image Decision: {image_result['decision']}")
    
    # Check conversation
    conversation = [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi! How can I help?"},
        {"role": "user", "content": "I need some information."}
    ]
    conv_result = check_conversation_compliance(conversation)
    print(f"Conversation Decision: {conv_result['decision']}")

JavaScript

const API_KEY = 'your_api_key_here';
const BASE_URL = 'https://test-compliance-api.umbrosus.com/api/v1/compliance';

async function checkTextCompliance(content, amount = 10) {
  const response = await fetch(`${BASE_URL}/text`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      content,
      amount,
      decision_method: 'average',
      verbose: 2
    })
  });
  
  return await response.json();
}

async function checkImageCompliance(imageUrl, context = null) {
  const payload = {
    content: imageUrl,
    amount: 10,
    verbose: 2
  };
  
  if (context) {
    payload.context = context;
  }
  
  const response = await fetch(`${BASE_URL}/image`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(payload)
  });
  
  return await response.json();
}

async function checkConversationCompliance(messages) {
  const response = await fetch(`${BASE_URL}/conversation`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      content: messages,
      amount: 8,
      decision_method: 'average',
      verbose: 2
    })
  });
  
  return await response.json();
}

// Example usage
(async () => {
  // Check text
  const result = await checkTextCompliance(
    'This is a user comment that needs to be reviewed.'
  );
  console.log('Decision:', result.decision);
  console.log('Reason:', result.reason);
  
  // Check image
  const imageResult = await checkImageCompliance(
    'https://example.com/user-upload.jpg',
    'Profile picture upload'
  );
  console.log('Image Decision:', imageResult.decision);
  
  // Check conversation
  const conversation = [
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', content: 'Hi! How can I help?' },
    { role: 'user', content: 'I need some information.' }
  ];
  const convResult = await checkConversationCompliance(conversation);
  console.log('Conversation Decision:', convResult.decision);
})();

cURL

# Check text content
curl -X POST "https://test-compliance-api.umbrosus.com/api/v1/compliance/text" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "This is a user comment that needs to be reviewed.",
    "amount": 10,
    "decision_method": "average",
    "verbose": 2
  }'

# Check image by URL
curl -X POST "https://test-compliance-api.umbrosus.com/api/v1/compliance/image" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "https://example.com/user-upload.jpg",
    "amount": 10,
    "context": "Profile picture upload",
    "verbose": 2
  }'

# Check conversation
curl -X POST "https://test-compliance-api.umbrosus.com/api/v1/compliance/conversation" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": [
      {"role": "user", "content": "Hello!"},
      {"role": "assistant", "content": "Hi! How can I help?"},
      {"role": "user", "content": "I need some information."}
    ],
    "amount": 8,
    "decision_method": "average",
    "verbose": 2
  }'

# With custom prompt
curl -X POST "https://test-compliance-api.umbrosus.com/api/v1/compliance/text" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Product review content here",
    "custom_prompt": "Check if this review violates our guidelines: no competitor mentions, no pricing discussions, must be relevant to the product.",
    "amount": 10,
    "verbose": 2
  }'

PHP

<?php

class ComplianceAPI {
    private $apiKey;
    private $baseUrl;
    
    public function __construct($apiKey, $baseUrl) {
        $this->apiKey = $apiKey;
        $this->baseUrl = $baseUrl;
    }
    
    private function makeRequest($endpoint, $data) {
        $ch = curl_init($this->baseUrl . $endpoint);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($data));
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Authorization: Bearer ' . $this->apiKey,
            'Content-Type: application/json'
        ]);
        
        $response = curl_exec($ch);
        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        
        if ($httpCode !== 200) {
            throw new Exception("API returned status code: " . $httpCode);
        }
        
        return json_decode($response, true);
    }
    
    public function checkText($content, $amount = 10, $context = null) {
        $data = [
            'content' => $content,
            'amount' => $amount,
            'decision_method' => 'average',
            'verbose' => 2
        ];
        
        if ($context) {
            $data['context'] = $context;
        }
        
        return $this->makeRequest('/text', $data);
    }
    
    public function checkImage($imageUrl, $context = null) {
        $data = [
            'content' => $imageUrl,
            'amount' => 10,
            'verbose' => 2
        ];
        
        if ($context) {
            $data['context'] = $context;
        }
        
        return $this->makeRequest('/image', $data);
    }
    
    public function checkConversation($messages, $amount = 8) {
        return $this->makeRequest('/conversation', [
            'content' => $messages,
            'amount' => $amount,
            'decision_method' => 'average',
            'verbose' => 2
        ]);
    }
}

// Example usage
$api = new ComplianceAPI(
    'your_api_key_here',
    'https://test-compliance-api.umbrosus.com/api/v1/compliance'
);

try {
    // Check text
    $result = $api->checkText(
        'This is a user comment that needs to be reviewed.',
        10,
        'User comment on product page'
    );
    echo "Decision: " . $result['decision'] . "\n";
    echo "Reason: " . $result['reason'] . "\n";
    
    // Check image
    $imageResult = $api->checkImage(
        'https://example.com/user-upload.jpg',
        'Profile picture upload'
    );
    echo "Image Decision: " . $imageResult['decision'] . "\n";
    
    // Check conversation
    $conversation = [
        ['role' => 'user', 'content' => 'Hello!'],
        ['role' => 'assistant', 'content' => 'Hi! How can I help?'],
        ['role' => 'user', 'content' => 'I need some information.']
    ];
    $convResult = $api->checkConversation($conversation);
    echo "Conversation Decision: " . $convResult['decision'] . "\n";
    
} catch (Exception $e) {
    echo "Error: " . $e->getMessage() . "\n";
}

?>

Best Practices

  • Start with verbose=1: Only use verbose=2 when you need detailed reviewer breakdowns for debugging or analytics.
  • Use appropriate endpoints: Choose the content-type-specific endpoint for better contextual analysis.
  • Provide context: Adding context about the content source improves accuracy significantly.
  • Handle errors gracefully: Implement retry logic for transient errors (5xx) and handle rate limits (429).
  • Optimize reviewer count: For high-volume applications, consider using 5-7 reviewers instead of 10 to reduce costs.
  • Cache results: For static content, cache compliance results to avoid redundant checks.
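The caching advice above can be sketched with a content-hash keyed lookup. This is a minimal in-memory illustration; production code would typically use Redis or similar, and should expire entries whenever rules or request parameters change:

```python
import hashlib

_cache: dict[str, dict] = {}

def cached_check(content: str, check_fn) -> dict:
    """Cache compliance results for identical content, keyed by SHA-256.

    check_fn is any callable that performs the actual API call and
    returns the parsed response dict.
    """
    key = hashlib.sha256(content.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = check_fn(content)
    return _cache[key]
```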