Why the Queue Exists
The moderation queue is where AI-driven decisions and human judgment meet. Every message, media file, or user that Stream’s moderation system flags flows into this centralized review hub. Moderators spend most of their time here, reviewing items, applying actions, and closing cases.
What Content Flows into the Queue
The queue shows all content the system has acted on, regardless of the action:
- Flagged → marked as suspicious for review.
- Blocked → removed but logged for oversight.
- Shadowblocked → hidden from others but visible to the sender.
- Bounced → rejected content attempts.
- Masked → partially hidden (e.g., profanity replaced with ****).
Note: The only exception is No Action; content from rules configured to take no action never appears in the queue.
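As a quick mental model, the relationship between a rule's action and queue visibility reduces to one predicate. The sketch below is illustrative TypeScript only; these names mirror this doc, not an actual Stream SDK type:

```typescript
// Hypothetical model of the moderation actions described above.
type ModerationAction =
  | "flag"        // marked as suspicious for review
  | "block"       // removed but logged for oversight
  | "shadowblock" // hidden from others, visible to the sender
  | "bounce"      // rejected content attempt
  | "mask"        // partially hidden (e.g., profanity replaced with ****)
  | "no_action";  // rule matched, but nothing enforced

// Everything except "no_action" lands in the review queue.
function appearsInQueue(action: ModerationAction): boolean {
  return action !== "no_action";
}
```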
Queues by Category
To help moderators work efficiently, the dashboard splits flagged items into three focused queues:
- Users Queue → Shows user accounts flagged due to repeated harmful behavior or policy violations.
- Text Queue → Shows chat messages, posts, or comments flagged by AI models, blocklists, regex rules, or semantic filters.
- Media Queue → Shows flagged images and videos, along with confidence scores.
Each queue provides tailored metadata so moderators can make faster, better-informed decisions.
Information Provided for Each Item
When you click into a flagged item, the queue provides deep context:
- Message or Media Preview → The flagged content itself.
- Conversation Context → Surrounding messages, which are crucial for understanding tone, sarcasm, or coded language.
- User Metadata → User ID, role, account details, and whether this is a first-time or repeat offender.
- Moderation Labels → The harm categories (e.g., hate speech, self-harm).
- Actions Applied → The system’s initial enforcement (flag, block, shadowblock, bounce, mask).
- Severity Level (Text) → Low, Medium, High, or Critical, helping moderators triage urgency.
- Confidence Score (Media) → A percentage indicating how confident the AI is that an image or video violates policy.
- History → Past moderation events involving the same user or channel.
This data equips moderators to act quickly without guessing.
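Put together, a queue item can be pictured as a record like the one below. This is an illustrative shape only; the field names are assumptions, not Stream's actual schema:

```typescript
// Illustrative shape of a flagged queue item, based on the fields above.
interface QueueItem {
  id: string;
  preview: string;                   // message text or media URL
  conversationContext: string[];     // surrounding messages
  user: {
    id: string;
    role: string;
    isRepeatOffender: boolean;
  };
  labels: string[];                  // harm categories, e.g. "hate_speech"
  actionApplied: "flag" | "block" | "shadowblock" | "bounce" | "mask";
  severity?: "low" | "medium" | "high" | "critical"; // text items only
  confidenceScore?: number;          // media items only, 0-100
  history: string[];                 // past moderation events
}
```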
Moderator Tools and Actions
From the queue, moderators can:
- Mark as Reviewed → Approve the AI’s decision.
- Restore / Unblock → Reinstate safe content that was wrongly flagged.
- Delete Content → Permanently remove harmful material.
- 24h Channel Ban → Ban the user from that specific channel for 24 hours.
- 24h Global Ban → Ban the user from all chat channels for 24 hours.
- Channel Ban → Permanently ban the user from that specific channel.
- Permanent Ban → Permanently ban the user from all chat channels.
- Bulk Actions → Apply the same decision across multiple flagged items at once.
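Most of these actions can also be applied programmatically. Below is a minimal sketch using the stream-chat Node SDK; the IDs are placeholders, and option names such as `timeout` (expressed in minutes) reflect the SDK as we understand it, so check the current API reference before relying on them:

```typescript
import { StreamChat } from "stream-chat";

// Server-side client; key and secret are placeholders.
const client = new StreamChat("API_KEY", "API_SECRET");

async function moderate() {
  // Delete Content: permanently remove harmful material (hard delete).
  await client.deleteMessage("message-id", true);

  // 24h Global Ban: timeout is expressed in minutes (24h = 1440).
  await client.banUser("offender-id", {
    banned_by_id: "moderator-id",
    timeout: 1440,
    reason: "Repeated policy violations",
  });

  // Channel Ban: scoped to one channel; omit `timeout` to make it
  // permanent. (Method availability may vary by SDK version.)
  const channel = client.channel("messaging", "channel-id");
  await channel.banUser("offender-id", {
    banned_by_id: "moderator-id",
    reason: "Harassment in this channel",
  });
}
```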
Filters and Search
The queue also provides powerful filters to help moderators prioritize:
- By Date (to review oldest or newest first).
- By Category (e.g., AI Text, Blocklists).
- By Content Type (e.g., chat messages, feed activities).
- By User ID or Entity ID (to track specific offenders).
- By Language (to narrow reviews in multilingual communities).
- By Action Taken (e.g., show only blocked items).
- By Action By (to show actions by specific moderators).
- By Reporter Type (e.g., Moderator, User, AI).
These filters prevent moderators from drowning in low-priority items and focus attention where it matters most.
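To make the filter dimensions concrete, here is a hypothetical filter object that mirrors the dashboard's options. The key names are illustrative, not Stream's actual query schema:

```typescript
// Hypothetical filter object mirroring the dashboard filters above.
interface QueueFilter {
  dateRange?: { from: Date; to: Date };
  category?: "ai_text" | "blocklist" | "regex" | "semantic";
  contentType?: "chat_message" | "feed_activity";
  userId?: string;
  entityId?: string;
  language?: string;                 // e.g., "en", "es"
  actionTaken?: "flag" | "block" | "shadowblock" | "bounce" | "mask";
  actionBy?: string;                 // moderator ID
  reporterType?: "moderator" | "user" | "ai";
}

// Example: surface only blocked chat messages from a specific user,
// so a suspected repeat offender can be reviewed in one pass.
const filter: QueueFilter = {
  actionTaken: "block",
  contentType: "chat_message",
  userId: "offender-id",
};
```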
Exploring Context with Channel Explorer
The Channel Explorer gives moderators a bird’s-eye view of what’s happening across all channels in your app. Unlike the moderation queue, which focuses on flagged items, Channel Explorer lets you proactively review live channels to spot emerging issues and take action directly in context.
With Channel Explorer you can:
- Browse all channels at once → See activity across your platform to identify hotspots.
- Drill into specific channels → Review conversation history to understand tone, intent, and repeated patterns.
- Act directly from the Explorer:
  - Review or delete harmful messages.
  - Ban or suspend users active in the channel.
  - Send a system message reminding users of community guidelines (e.g., “Reminder: Please keep discussion respectful and follow our community standards”).
Example: If moderators see a heated debate turning toxic in a high-volume channel, they can step in early by posting a gentle reminder, rather than waiting for violations to escalate and hit the queue.
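The same intervention can be scripted server-side. A minimal sketch with the stream-chat Node SDK, assuming a dedicated moderator user (`moderation-bot`) has been provisioned in your app; `queryChannels` and `sendMessage` are standard SDK calls:

```typescript
import { StreamChat } from "stream-chat";

// Server-side client; key and secret are placeholders.
const client = new StreamChat("API_KEY", "API_SECRET");

async function remindChannel(channelType: string, channelId: string) {
  // Spot recently active channels (a simple proxy for "hotspots").
  const active = await client.queryChannels(
    { last_message_at: { $gte: "2024-01-01T00:00:00Z" } },
    [{ last_message_at: -1 }],
    { limit: 10 }
  );
  console.log(active.map((c) => c.cid));

  // Post a guideline reminder on behalf of the moderator user.
  const channel = client.channel(channelType, channelId);
  await channel.sendMessage({
    text: "Reminder: Please keep discussion respectful and follow our community standards.",
    user_id: "moderation-bot",
  });
}
```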
Best Practices for Queue Management
- Leverage Filters: Don’t review linearly; filter by content type, category, or date.
- Watch Repeat Offenders: Use user metadata and history to escalate bans faster.
- Audit Regularly: Review flagged cases to improve your policies and thresholds over time.
The moderation queue is the heart of human oversight in Stream’s AI Moderation system. It gives moderators the context, tools, and filters they need to act decisively while ensuring that automated actions are transparent and auditable.
Next, we’ll explore how to manage workflows inside the queue using filters, search, and bulk review to stay efficient as moderation scales.