Stream Feeds
Integrating moderation in Activity Feeds v3 is straightforward and requires minimal code changes. Once you configure a moderation policy, Stream automatically moderates activities and comments as they are created or updated. This guide walks you through setting up moderation and monitoring flagged content.
Configure Moderation Policy
A moderation policy defines how content should be moderated. You can configure it through the dashboard or via the API.
For Feeds v3, Stream uses a hierarchical config key lookup to find the right policy:
- feeds:{group_id}:{feed_id} -- most specific (individual feed)
- feeds:{group_id} -- feed group level
- feeds -- app-wide default
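The lookup behavior can be sketched as plain Ruby (this helper is illustrative only, not part of the SDK): candidate keys are tried from most to least specific, and the first configured key wins.

```ruby
# Illustrative sketch of the hierarchical config key lookup.
def moderation_config_keys(group_id, feed_id)
  [
    "feeds:#{group_id}:#{feed_id}", # individual feed
    "feeds:#{group_id}",            # feed group
    "feeds",                        # app-wide default
  ]
end

# The most specific key with a configured policy wins.
def resolve_policy(configured_keys, group_id, feed_id)
  moderation_config_keys(group_id, feed_id).find { |k| configured_keys.include?(k) }
end

resolve_policy(["feeds", "feeds:user"], "user", "jane")  # => "feeds:user"
resolve_policy(["feeds"], "timeline", "jane")            # => "feeds"
```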
To set up an app-wide policy for all feeds content:
client.moderation.upsert_config(GetStream::Generated::Models::UpsertConfigRequest.new(
  key: "feeds",
  ai_text_config: GetStream::Generated::Models::AITextConfig.new(
    rules: [
      GetStream::Generated::Models::AITextRule.new(label: "SPAM", action: "flag"),
      GetStream::Generated::Models::AITextRule.new(label: "HARASSMENT", action: "remove"),
    ]
  ),
  block_list_config: GetStream::Generated::Models::BlockListConfig.new(
    rules: [GetStream::Generated::Models::BlockListRule.new(name: "profanity_en", action: "remove")]
  ),
  ai_image_config: GetStream::Generated::Models::AIImageConfig.new(
    rules: [GetStream::Generated::Models::AIImageRule.new(label: "Non-Explicit Nudity", action: "flag")]
  )
))

You can also configure policies per feed group (e.g., feeds:user for user feeds, feeds:timeline for timeline feeds) to apply different moderation rules to different feed types.
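For instance, a stricter policy scoped to the user feed group might look like this (a sketch mirroring the app-wide call above; only the key and the chosen rules differ):

```ruby
# Hypothetical per-feed-group policy: the "feeds:user" key overrides
# the app-wide "feeds" default for user feeds only.
client.moderation.upsert_config(GetStream::Generated::Models::UpsertConfigRequest.new(
  key: "feeds:user",
  ai_text_config: GetStream::Generated::Models::AITextConfig.new(
    rules: [
      # Stricter than the app-wide default: remove spam outright on user feeds.
      GetStream::Generated::Models::AITextRule.new(label: "SPAM", action: "remove"),
    ]
  )
))
```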
Auto-Moderation on Activities
Once the policy is configured, activities are automatically moderated when created or updated. No additional code is needed -- just add activities as you normally would.
When an activity contains content that violates your policy, Stream takes the configured action:
- remove -- the activity is removed and appears in the review queue
- flag -- the activity is published but flagged for review
- bounce -- the activity is rejected entirely and not saved
- shadow_block -- the activity is hidden from other users but visible to the creator
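The visibility rules implied by these actions can be summarized in a small sketch (the function and its parameters are our own illustration, not an SDK API):

```ruby
# Illustrative mapping from a moderation action to whether a given
# viewer should see the activity.
def visible_to?(action, viewer:, creator:)
  case action
  when "keep", "flag"     then true               # published (flag also enters the review queue)
  when "remove", "bounce" then false              # not shown to anyone
  when "shadow_block"     then viewer == creator  # creator still sees their own post
  else true
  end
end

visible_to?("shadow_block", viewer: "alice", creator: "alice")  # => true
visible_to?("shadow_block", viewer: "bob", creator: "alice")    # => false
```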
The API response includes a moderation object with details about the action taken:
{
  "id": "activity-123",
  "actor": "user-456",
  "verb": "post",
  "object": "...",
  "moderation": {
    "status": "complete",
    "recommended_action": "remove",
    "text_harms": ["harassment"],
    "blocklists_matched": ["profanity_en"]
  }
}

| Field | Type | Description |
|---|---|---|
| status | string | Moderation status: complete or partial (if async checks pending) |
| recommended_action | string | Action taken: keep, flag, remove, bounce, shadow_block |
| text_harms | string[] | Labels from AI text analysis (e.g., ["harassment", "spam"]) |
| image_harms | string[] | Labels from image moderation (e.g., ["nudity"]) |
| blocklists_matched | string[] | Blocklists that matched (e.g., ["profanity_en"]) |
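In a Ruby application you can inspect these fields after parsing the response body; a minimal sketch, using the payload shape shown above:

```ruby
require "json"

# Example payload matching the response shape documented above.
payload = <<~JSON
  {
    "id": "activity-123",
    "moderation": {
      "status": "complete",
      "recommended_action": "remove",
      "text_harms": ["harassment"],
      "blocklists_matched": ["profanity_en"]
    }
  }
JSON

activity = JSON.parse(payload)
mod = activity["moderation"]

# React to the action taken, e.g. to surface a notice in your own UI.
if mod && mod["status"] == "complete" && mod["recommended_action"] == "remove"
  puts "Activity #{activity['id']} was removed (harms: #{mod['text_harms'].join(', ')})"
end
```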
Auto-Moderation on Comments
Comments are also automatically moderated using the same policy. When a user adds a comment, Stream checks the content and applies the configured action.
If a comment is blocked, it will not be visible to other users. The response will indicate the moderation action:
{
  "id": "comment-789",
  "text": "...",
  "moderation": {
    "status": "complete",
    "recommended_action": "remove",
    "text_harms": ["toxicity"]
  }
}

Reactions (likes, etc.) are not auto-moderated; only activities and comments are.
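Stream already hides blocked comments from other users server-side; if you also cache parsed responses locally, the same rule can be mirrored with a small filter (an illustrative sketch, not an SDK helper):

```ruby
# Hide comments whose moderation outcome removed or bounced them.
def displayable_comments(comments)
  comments.reject do |c|
    action = c.dig("moderation", "recommended_action")
    %w[remove bounce].include?(action)
  end
end

comments = [
  { "id" => "c1", "text" => "hello", "moderation" => { "recommended_action" => "keep" } },
  { "id" => "c2", "text" => "...",   "moderation" => { "recommended_action" => "remove" } },
  { "id" => "c3", "text" => "nice" }, # no moderation object: displayed as-is
]

displayable_comments(comments).map { |c| c["id"] }  # => ["c1", "c3"]
```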
Monitoring Moderated Content
You can monitor all flagged or blocked content from the Stream dashboard. The dashboard provides three review queues:
Users Queue
Contains all users who were flagged or who have flagged content. Actions available:
- Mark Reviewed -- acknowledge the user has been assessed
- Permanently Ban User -- restrict the user from the platform
- Temporarily Ban User -- restrict access for a specified period
- Delete User -- remove the user account
- Delete All Content -- remove all content from the user
Text Queue
Contains flagged or blocked text content. Actions available:
- Mark Reviewed -- content doesn't require further action
- Delete -- remove the content from the platform
- Unblock -- restore content that was incorrectly blocked
Media Queue
Contains flagged or blocked images and videos. Same actions as the Text Queue are available.
User-Driven Actions
In addition to automated moderation, users can flag content or other users for review.
Flag a User
client.moderation.flag(GetStream::Generated::Models::FlagRequest.new(
  entity_type: "stream:user",
  entity_id: target_user_id,
  reason: "harassment",
  user_id: reporting_user_id
))

Ban a User
client.moderation.ban(GetStream::Generated::Models::BanRequest.new(
  target_user_id: target_user_id,
  banned_by_id: moderator_id,
  reason: "Repeated violations"
))

Banned users are prevented from reading feeds and creating activities or reactions.
Next Steps
- Configuration -- Advanced policy setup and per-feed-group configs
- Review Queue -- Full review queue API reference
- Webhooks -- Get notified when content is moderated
- Feeds v2 Moderation -- Template-based moderation for Feeds v2