The Stream Dashboard is your command center for configuring and monitoring AI Moderation. Before diving into the details, it’s essential to understand how to navigate the interface and ensure you’re working within the correct app environment, especially if your organization manages multiple apps.
Selecting the Right App
When you log in to the Stream Dashboard, the first thing to check is which app environment you're in. Stream allows organizations to manage multiple apps, such as dev, staging, or production, within a single account.
To select or switch apps:
- Look at the top-left corner of the dashboard for the App Selector dropdown.
- Click the dropdown to view all apps associated with your account.
- Select the correct app before making any changes; each app has its own moderation settings, policies, and queues.
Pro Tip: Use consistent naming conventions (e.g., “AppName-Prod”, “AppName-Staging”) to avoid misconfigurations across environments.
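If you manage many apps, a small check can catch names that drift from the convention above. The helper below is a hypothetical sketch (not part of any Stream tooling); the `ALLOWED_ENVS` set and the `AppName-Environment` pattern are assumptions based on the tip's examples.

```python
import re

# Hypothetical convention check: app names should look like
# "AppName-Prod" or "AppName-Staging", per the tip above.
ALLOWED_ENVS = {"Prod", "Staging", "Dev"}
NAME_PATTERN = re.compile(r"^(?P<app>[A-Za-z0-9]+)-(?P<env>[A-Za-z]+)$")

def check_app_name(name: str) -> bool:
    """Return True if the app name uses a recognized environment suffix."""
    match = NAME_PATTERN.match(name)
    return bool(match) and match.group("env") in ALLOWED_ENVS

print(check_app_name("ChatApp-Prod"))        # follows the convention
print(check_app_name("chatapp_production"))  # does not
```

Running a check like this against your app list before a release makes it obvious when someone is about to configure the wrong environment.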
Dashboard Navigation Overview
The left-hand sidebar of the dashboard is your main navigation hub. Here’s a quick overview of where to find key moderation tools:
- Moderation → Overview
- Visualize moderation trends, volume, and reviewer activity over time to help optimize policies.
- Moderation → Users Queue
- View and manage flagged users in real time. Filter by user ID, date, reporter, reason, etc.
- Moderation → Text Queue
- View and manage flagged text in real time. Filter by user ID, category, language, type, etc.
- Moderation → Media Queue
- View and manage flagged media in real time. Filter by user ID, content type, date, etc.
- Moderation → Policies
- Create, edit, and manage moderation policies and rules tied to AI, blocklists, and regex patterns.
- Moderation → Channel Explorer
- Browse and search across channels to review flagged content in context and identify patterns or high-risk areas.
- Moderation → Logs
- Access a historical log of all moderation decisions and system activity for compliance and transparency.
- Moderation → Semantic Filters
- Set up advanced filters that use AI to detect harmful content based on meaning and context, not just keywords.
- Moderation → Advanced Filters
- Combine blocklists, regex patterns, and custom rules to create precise filters for detecting and managing harmful content.
- Moderation → Preferences
- Configure global moderation settings, including default actions, queue behavior, and review workflows tailored to your platform’s needs.
- Moderation → Feeds Templates
- Define and manage reusable moderation templates for feeds, ensuring consistent policies and actions across multiple apps or channels.
Filter Tools
Use the filters within the various queues to quickly find users, flagged messages, or specific moderation events.
Within any of the moderation queues, apply filters by:
- Date
- Category
- Content Type
- Entity ID
- User ID
- Language
- Action
- Action By
- Reporter Type
- Team
This helps you and your team focus on high-priority content first.
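The filter criteria above combine into a single query; only the filters you set are applied. The sketch below illustrates that idea in code. It is not the Stream SDK API: the `build_queue_filter` helper and its field names are hypothetical, loosely mirroring the filter list above.

```python
from datetime import date

# Hypothetical helper: compose queue filters into one query object,
# dropping any criteria that were left unset.
def build_queue_filter(**criteria):
    """Keep only the filters that were actually set."""
    return {key: value for key, value in criteria.items() if value is not None}

filters = build_queue_filter(
    date=str(date(2024, 5, 1)),
    category="harassment",
    content_type="text",
    user_id="user_123",
    language="en",
    action=None,  # unset, so it is omitted from the query
)
print(filters)
```

Combining narrower filters (for example, category plus date) is how teams surface high-priority content first rather than paging through the whole queue.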
Selecting the correct app and understanding the dashboard layout is critical to avoiding costly mistakes. Once you're in the right environment, the left-hand navigation and filtering tools give you quick access to every layer of moderation, from policy design to real-time review.
What’s Next
Now that you know how to move through the dashboard and select the right environment, let’s dig into the most important Admin capabilities, including policies, harms, severity levels, and more.