Stream’s Dashboard gives Admins full control over how AI Moderation behaves, from defining policies to tuning how the system responds. As an Admin, you’re effectively designing the blueprint for how harmful content is detected, categorized, and acted upon.
This section provides a high-level walkthrough of what you can configure directly within the dashboard as an Admin.
Moderation Policies & Rules
Policies are the foundation of your moderation system. They define what the AI looks for and how it should respond.
As an Admin, you can:
- Create and manage policies that define what types of content should be flagged.
- Set up rule types including AI-based detection, blocklists, and regex patterns.
- Assign different policies to specific apps, channels, or user types.
Each policy acts like a playbook for how the AI should analyze content, so it’s critical to structure your policies around your unique use case.
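To make the idea concrete, here is a minimal sketch of how a policy with blocklist and regex rules could be modeled. The field names, rule types, and structure are illustrative assumptions, not Stream’s actual policy schema; in practice you configure all of this from the Dashboard.

```python
import re
from dataclasses import dataclass, field

# Illustrative only: field names and rule types are hypothetical,
# not Stream's actual policy schema.
@dataclass
class Rule:
    rule_type: str          # "ai", "blocklist", or "regex"
    pattern: str = ""       # regex pattern or blocklist phrase
    harm: str = "other"     # harm category this rule maps to

@dataclass
class Policy:
    name: str
    rules: list[Rule] = field(default_factory=list)

    def matches(self, text: str) -> list[str]:
        """Return the harm categories whose blocklist/regex rules match the text."""
        hits = []
        for rule in self.rules:
            if rule.rule_type == "blocklist" and rule.pattern.lower() in text.lower():
                hits.append(rule.harm)
            elif rule.rule_type == "regex" and re.search(rule.pattern, text, re.IGNORECASE):
                hits.append(rule.harm)
        return hits

# Example: a policy assigned to a public chat channel
chat_policy = Policy(
    name="public-chat",
    rules=[
        Rule(rule_type="blocklist", pattern="free crypto", harm="spam"),
        Rule(rule_type="regex", pattern=r"\b(buy|sell)\s+followers\b", harm="spam"),
    ],
)

print(chat_policy.matches("DM me to buy followers and get free crypto"))  # ['spam', 'spam']
```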
Harms and Actions
Policies are composed of harms (the types of violations) and actions (what to do about them).
Admins can:
- Create and customize moderation harms (e.g., hate speech, spam, grooming).
- Set up actions to tell the AI what to do (e.g., flag, block).
For example, you might configure insults to be flagged for moderator review, while sexual harassment is always blocked immediately (see the sketch below).
This framework ensures that risks are prioritized appropriately and moderators don’t waste time on low-harm cases.
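As a rough illustration, the harm-to-action mapping boils down to a simple lookup. The category and action names below are assumptions for the example, not Stream’s exact labels:

```python
# Hypothetical harm-to-action mapping; the actual categories and
# action names are configured in the Stream Dashboard.
HARM_ACTIONS = {
    "insult": "flag",              # queue for moderator review
    "spam": "flag",
    "sexual_harassment": "block",  # remove immediately
    "grooming": "block",
}

def action_for(harm: str) -> str:
    """Fall back to flagging unknown harms so a human always takes a look."""
    return HARM_ACTIONS.get(harm, "flag")

print(action_for("insult"))             # flag
print(action_for("sexual_harassment"))  # block
```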
Severity Levels & Confidence Scores
Not all violations are created equal, which is why Stream offers severity levels and confidence scores to calibrate moderation sensitivity.
Admins can tune the sensitivity of the AI by adjusting:
- Confidence thresholds for images and videos.
- Severity levels to categorize content from low to critical risk.
For example, you might choose to auto-block explicit sexual content only when confidence is above 95%, as in the sketch below. This flexibility balances automation with oversight.
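The decision rule itself is straightforward. Here is a minimal sketch of the thresholding from the example above; the harm labels and threshold values are placeholders you would tune in the Dashboard:

```python
# Placeholder thresholds per harm; in practice you tune these in the Dashboard.
AUTO_BLOCK_THRESHOLDS = {
    "explicit_sexual_content": 0.95,  # only auto-block when the model is very sure
    "violence": 0.90,
}

def decide(harm: str, confidence: float) -> str:
    """Auto-block above the harm's threshold; otherwise flag for human review."""
    threshold = AUTO_BLOCK_THRESHOLDS.get(harm, 1.01)  # default: never auto-block
    return "block" if confidence >= threshold else "flag"

print(decide("explicit_sexual_content", 0.97))  # block
print(decide("explicit_sexual_content", 0.80))  # flag
```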
Notifications & Alerts
Timely response is critical for high-risk violations. Admins can set up webhook alerts to make sure the right people are notified when serious harms occur.
Admins control how and when reviewers are alerted:
- Configure webhook alerts for high-severity content that trigger external systems (see the receiver sketch after this list).
- Set reviewer assignments based on category or tag.
This ensures critical issues are surfaced immediately to your team.
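On the receiving side, a webhook alert is simply an HTTP POST to an endpoint you own. Below is a minimal sketch of a receiver using Flask; the payload fields shown are hypothetical, so check Stream’s webhook documentation for the exact shape of the events you receive.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/moderation-alerts", methods=["POST"])
def moderation_alert():
    # Hypothetical payload fields; consult Stream's webhook docs for the real schema.
    event = request.get_json(force=True)
    harm = event.get("harm", "unknown")
    severity = event.get("severity", "low")

    # Route high-severity harms straight to the on-call reviewer.
    if severity in ("high", "critical"):
        notify_on_call(harm, event)

    return "", 204

def notify_on_call(harm, event):
    # Placeholder: page your team via Slack, PagerDuty, email, etc.
    print(f"ALERT: {harm} ({event.get('severity')}) needs immediate review")

if __name__ == "__main__":
    app.run(port=8000)
```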
Audit Logs & Insights
Every decision matters for accountability, continuous improvement, and, above all, compliance.
All moderation actions are tracked in the audit log. Admins can:
- View a detailed history of moderation decisions.
- Track reviewer activity for accountability.
- Use data from the Overview tab to spot trends and refine rules.
Audit logs demonstrate compliance with legal and industry requirements and provide a record of how your team is enforcing rules.
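If you export audit-log data for your own reporting, spotting trends can be as simple as counting decisions per harm and per reviewer over a time window. A small sketch, with hypothetical record fields rather than Stream’s actual export format:

```python
from collections import Counter

# Hypothetical audit-log records; Stream's actual export fields may differ.
audit_log = [
    {"harm": "spam", "action": "flag", "reviewer": "alice"},
    {"harm": "spam", "action": "block", "reviewer": "bob"},
    {"harm": "insult", "action": "flag", "reviewer": "alice"},
]

# Count decisions per harm to see which rules fire most often.
by_harm = Counter(entry["harm"] for entry in audit_log)
# Count decisions per reviewer for accountability reporting.
by_reviewer = Counter(entry["reviewer"] for entry in audit_log)

print(by_harm.most_common())      # [('spam', 2), ('insult', 1)]
print(by_reviewer.most_common())  # [('alice', 2), ('bob', 1)]
```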
The Admin dashboard gives you deep control without needing code. By configuring policies, rules, harms, actions, and alerts, you define exactly how your AI Moderation system operates and create a framework your moderation team can rely on.
In the coming lessons, we will dive into setting up rules, policies, severity levels, and more.