Moderating user-generated content at scale is difficult, especially as harmful behavior patterns evolve. Rule Builder provides a new way to define complex moderation logic without writing a single line of code.
What is Rule Builder?
Rule Builder is a new dashboard feature within Stream's AI Moderation platform that lets you:
- Define moderation rules based on combinations of conditions like toxic text + repeated offenses + new account age.
- Use AND/OR logic to capture complex behavior patterns.
- Automatically trigger actions like flagging, banning, or shadow banning based on defined thresholds.
- Combine multiple signal types: text, images, video, account metadata, and volume metrics.
All without code.
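Conceptually, each rule pairs a set of conditions with an action. The sketch below is only a hypothetical way to picture that structure; the type names and fields are assumptions made for illustration, not Stream's actual schema or API, and in practice the rules are built visually in the dashboard rather than in code.

```typescript
// Hypothetical illustration of what a Rule Builder rule combines.
// These types and field names are assumptions for explanation only,
// not Stream's schema -- rules are configured in the dashboard, not in code.

type Signal = "text" | "image" | "video" | "account" | "volume";

interface Condition {
  signal: Signal;        // which signal type the condition inspects
  label: string;         // e.g. "hate_speech", "spam", "new_account"
  threshold: number;     // how many matches are required
  windowHours?: number;  // optional time window for counting matches
}

interface Rule {
  name: string;
  logic: "AND" | "OR";   // how the conditions are combined
  conditions: Condition[];
  action: "flag" | "ban" | "shadow_ban";
}
```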
Example Use Cases
Detect repeat harassment:
Ban users who, within 24 hours, post 5+ harmful messages (hate speech, harassment, or threats) or 50+ profane messages.
Stop bot-driven spam:
Shadow ban accounts that are less than 24 hours old and post 3+ spam messages within 1 hour.
Prevent message flooding:
Flag users who send 50+ messages in 1 hour, at least 5 of which contain URLs.
Enforce zero-tolerance policies:
Permanently ban users who post a single message tied to terrorism or upload one violent or hate-symbol image.
Detect coordinated abuse:
Flag users who send 100+ messages in 24 hours, including 10+ spam/scam messages and 5+ with URLs.
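To make the mapping from policy to rule concrete, here is how the first two use cases above might be expressed, reusing the illustrative Rule type from the earlier sketch. Again, this is a hypothetical encoding, not Stream's actual configuration format.

```typescript
// Hypothetical encodings of two of the use cases above, reusing the
// illustrative Rule and Condition types from the previous sketch.

const repeatHarassment: Rule = {
  name: "Detect repeat harassment",
  logic: "OR",
  conditions: [
    // hate speech, harassment, or threats
    { signal: "text", label: "harmful", threshold: 5, windowHours: 24 },
    { signal: "text", label: "profanity", threshold: 50, windowHours: 24 },
  ],
  action: "ban",
};

const botSpam: Rule = {
  name: "Stop bot-driven spam",
  logic: "AND",
  conditions: [
    { signal: "text", label: "spam", threshold: 3, windowHours: 1 },
    { signal: "account", label: "account_age_under_24h", threshold: 1 },
  ],
  action: "shadow_ban",
};
```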
Advantages of Rule Builder
Most moderation systems are either too rigid or require constant engineering involvement to update and refine. Rule Builder is designed to give trust and safety teams the flexibility and control they need to respond to evolving behavior patterns without touching code or slowing down release cycles.
Translate policy into automated logic:
Rule Builder allows non-technical teams to define complex moderation logic that maps directly to their internal policies. Instead of relying on engineers to translate enforcement rules into code, teams can create, test, and refine logic directly in the dashboard.
Deploy and iterate fast:
As new abuse patterns emerge, you can update rules instantly, with no waiting for development sprints or app deployments. This speed is critical when addressing time-sensitive threats like raids, coordinated spam, or policy changes.
Reduce review volume:
By automating everyday enforcement actions such as flags, bans, and shadow bans, Rule Builder helps teams minimize the number of cases that require manual moderation. This reduces operational overhead while maintaining a high standard of safety.
Handle multi-modal inputs:
Rule Builder supports logic that combines multiple content types, including text, images, videos, and account history. This allows you to detect complex behavior that spans formats, like spam images from new accounts or harmful language coupled with suspicious account patterns.
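For instance, continuing the same illustrative sketch (and assuming the hypothetical Rule type defined earlier is in scope), a cross-format rule might combine an image signal with account metadata. The field names remain assumptions, not Stream's schema.

```typescript
// Hypothetical cross-format rule: spam images posted by a brand-new account.
const spamImagesFromNewAccounts: Rule = {
  name: "Spam images from new accounts",
  logic: "AND",
  conditions: [
    { signal: "image", label: "spam", threshold: 2, windowHours: 1 },
    { signal: "account", label: "account_age_under_24h", threshold: 1 },
  ],
  action: "flag",
};
```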
Support high-scale enforcement:
Whether you're moderating a few thousand users or tens of millions, Rule Builder ensures consistent, low-latency enforcement. It's built to scale with your application without introducing performance bottlenecks or moderation blind spots.
How to Get Started
Rule Builder is available now in the Stream dashboard at no additional cost and can be enabled on request. To get access or provide feedback, contact moderation@getstream.io.