Getting Started

Stream's Moderation API lets you integrate content moderation into any application. This guide walks you through installation, configuration, and your first moderation check.

Installation

pip install getstream

Initialize the Client

from getstream import Stream

client = Stream(api_key="YOUR_API_KEY", api_secret="YOUR_API_SECRET")
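Hard-coding credentials is fine for a first test, but in practice you may prefer to read them from the environment. A minimal sketch (the STREAM_API_KEY / STREAM_API_SECRET variable names are a common convention, not an SDK requirement):

```python
import os

# Assumption: credentials live in environment variables; the setdefault
# calls only provide placeholder fallbacks for this demo.
os.environ.setdefault("STREAM_API_KEY", "YOUR_API_KEY")
os.environ.setdefault("STREAM_API_SECRET", "YOUR_API_SECRET")

api_key = os.environ["STREAM_API_KEY"]
api_secret = os.environ["STREAM_API_SECRET"]

# client = Stream(api_key=api_key, api_secret=api_secret)
```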

Create a Moderation Policy

Before checking content, create a moderation configuration that defines which rules to apply:

client.moderation().upsert_config(
    key="my_config",
    ai_text_config={
        "rules": [
            {"label": "SPAM", "action": "flag"},
            {"label": "HARASSMENT", "action": "remove"},
        ],
    },
    block_list_config={
        "rules": [{"name": "profanity_en", "action": "remove"}],
    },
)

For Stream Chat, use the config key chat:messaging; for Stream Feeds, use feeds. See Configuration for details.
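For example, an AI-text config destined for Stream Chat can be built as a plain dictionary and passed to the same upsert_config call shown above, with key="chat:messaging" (the rule set here is illustrative, not a recommended policy):

```python
# Illustrative ai_text_config for a Stream Chat app. Pass it as:
#   client.moderation().upsert_config(key="chat:messaging",
#                                     ai_text_config=chat_ai_text_config)
chat_ai_text_config = {
    "rules": [
        {"label": "SPAM", "action": "flag"},         # hold spam for review
        {"label": "HARASSMENT", "action": "remove"},  # remove harassment outright
    ],
}
```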

Check Content

from getstream.models import ModerationPayload

response = client.moderation().check(
    entity_type="stream:chat:v1:message",
    entity_id="message-123",
    entity_creator_id="user-456",
    moderation_payload=ModerationPayload(
        texts=["Hello, this is a test message"],
    ),
    config_key="my_config",
)

print(response.data.recommended_action)  # "keep", "flag", or "remove"

Handle the Response

The response includes:

  • recommended_action -- "keep", "flag", or "remove"
  • status -- "complete" or "partial" (if async checks are still running)
  • item -- the review queue item (if content was flagged or removed)

action = response.data.recommended_action

if action == "keep":
    # Content is safe, no action needed
    pass
elif action == "flag":
    # Content is suspicious, send to review queue
    print("Flagged for review:", response.data.item)
elif action == "remove":
    # Content violates policies, remove it
    print("Content removed:", response.data.item)
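The branching above can be wrapped in a small helper so every code path that checks content handles the three actions consistently. A minimal sketch (the function name and return values are hypothetical, not part of the SDK):

```python
def handle_moderation(action: str) -> str:
    """Map a recommended_action to what the application should do next."""
    if action == "keep":
        return "published"          # content is safe; deliver as-is
    if action == "flag":
        return "queued_for_review"  # suspicious; hold for a human moderator
    if action == "remove":
        return "blocked"            # policy violation; do not deliver
    raise ValueError(f"unexpected recommended_action: {action}")
```

Raising on an unknown value means a new action added by the API surfaces as an explicit error instead of being silently treated as safe.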

Next Steps