# Stream Feeds

Integrating moderation into Activity Feeds v3 requires minimal code changes. Once you configure a moderation policy, Stream automatically moderates activities and comments as they are created or updated. This guide walks you through setting up moderation and monitoring flagged content.

## Configure Moderation Policy

A moderation policy defines how content should be moderated. You can configure it through the [dashboard](/moderation/docs/<framework>/configuration/policies/) or via the API.

For Feeds v3, Stream uses a hierarchical config key lookup to find the right policy:

1. `feeds:{group_id}:{feed_id}` -- most specific (individual feed)
2. `feeds:{group_id}` -- feed group level
3. `feeds` -- app-wide default
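
The lookup order above can be sketched as a small pure function. This is an illustration of the resolution logic, not the server implementation; the function and variable names are hypothetical:

```go
package main

import "fmt"

// moderationConfigKeys returns candidate config keys from most to least
// specific, mirroring the Feeds v3 lookup order described above.
func moderationConfigKeys(groupID, feedID string) []string {
	return []string{
		fmt.Sprintf("feeds:%s:%s", groupID, feedID), // individual feed
		fmt.Sprintf("feeds:%s", groupID),            // feed group
		"feeds",                                     // app-wide default
	}
}

// resolveConfigKey picks the first candidate key that has a configured
// policy; an empty string means no moderation policy applies.
func resolveConfigKey(configured map[string]bool, groupID, feedID string) string {
	for _, key := range moderationConfigKeys(groupID, feedID) {
		if configured[key] {
			return key
		}
	}
	return ""
}

func main() {
	configured := map[string]bool{"feeds": true, "feeds:user": true}
	fmt.Println(resolveConfigKey(configured, "user", "jane"))     // feeds:user
	fmt.Println(resolveConfigKey(configured, "timeline", "main")) // feeds
}
```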

To set up an app-wide policy for all feeds content:

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.upsertConfig({
  key: "feeds",
  ai_text_config: {
    rules: [
      { label: "SPAM", action: "flag" },
      { label: "HARASSMENT", action: "remove" },
    ],
  },
  block_list_config: {
    rules: [{ name: "profanity_en", action: "remove" }],
  },
  ai_image_config: {
    rules: [{ label: "Non-Explicit Nudity", action: "flag" }],
  },
});
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().upsert_config(
    key="feeds",
    ai_text_config=AITextConfig(
        rules=[
            AITextRule(label="SPAM", action="flag"),
            AITextRule(label="HARASSMENT", action="remove"),
        ],
    ),
    block_list_config=BlockListConfig(
        rules=[BlockListRule(name="profanity_en", action="remove")],
    ),
    ai_image_config=AIImageConfig(
        rules=[AIImageRule(label="Non-Explicit Nudity", action="flag")],
    ),
)
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
client.Moderation().UpsertConfig(ctx, &getstream.UpsertConfigRequest{
    Key: "feeds",
    AiTextConfig: &getstream.AITextConfig{
        Rules: []getstream.AITextRule{
            {Label: "SPAM", Action: "flag"},
            {Label: "HARASSMENT", Action: "remove"},
        },
    },
    BlockListConfig: &getstream.BlockListConfig{
        Rules: []getstream.BlockListRule{
            {Name: "profanity_en", Action: "remove"},
        },
    },
    AiImageConfig: &getstream.AIImageConfig{
        Rules: []getstream.AIImageRule{
            {Label: "Non-Explicit Nudity", Action: "flag"},
        },
    },
})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation().upsertConfig(UpsertConfigRequest.builder()
    .key("feeds")
    .aiTextConfig(AITextConfig.builder()
        .rules(List.of(
            AITextRule.builder().label("SPAM").action("flag").build(),
            AITextRule.builder().label("HARASSMENT").action("remove").build()
        )).build())
    .blockListConfig(BlockListConfig.builder()
        .rules(List.of(BlockListRule.builder().name("profanity_en").action("remove").build()))
        .build())
    .aiImageConfig(AIImageConfig.builder()
        .rules(List.of(AIImageRule.builder().label("Non-Explicit Nudity").action("flag").build()))
        .build())
    .build()).execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->upsertConfig(new UpsertConfigRequest(
    key: 'feeds',
    aiTextConfig: new AITextConfig(
        rules: [
            new AITextRule(label: 'SPAM', action: 'flag'),
            new AITextRule(label: 'HARASSMENT', action: 'remove'),
        ],
    ),
    blockListConfig: new BlockListConfig(
        rules: [new BlockListRule(name: 'profanity_en', action: 'remove')],
    ),
    aiImageConfig: new AIImageConfig(
        rules: [new AIImageRule(label: 'Non-Explicit Nudity', action: 'flag')],
    ),
));
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.upsert_config(GetStream::Generated::Models::UpsertConfigRequest.new(
  key: "feeds",
  ai_text_config: GetStream::Generated::Models::AITextConfig.new(
    rules: [
      GetStream::Generated::Models::AITextRule.new(label: "SPAM", action: "flag"),
      GetStream::Generated::Models::AITextRule.new(label: "HARASSMENT", action: "remove"),
    ]
  ),
  block_list_config: GetStream::Generated::Models::BlockListConfig.new(
    rules: [GetStream::Generated::Models::BlockListRule.new(name: "profanity_en", action: "remove")]
  ),
  ai_image_config: GetStream::Generated::Models::AIImageConfig.new(
    rules: [GetStream::Generated::Models::AIImageRule.new(label: "Non-Explicit Nudity", action: "flag")]
  )
))
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.UpsertConfigAsync(new UpsertConfigRequest
{
    Key = "feeds",
    AiTextConfig = new AITextConfig
    {
        Rules = new List<AITextRule>
        {
            new() { Label = "SPAM", Action = "flag" },
            new() { Label = "HARASSMENT", Action = "remove" },
        },
    },
    BlockListConfig = new BlockListConfig
    {
        Rules = new List<BlockListRule>
        {
            new() { Name = "profanity_en", Action = "remove" },
        },
    },
    AiImageConfig = new AIImageConfig
    {
        Rules = new List<AIImageRule>
        {
            new() { Label = "Non-Explicit Nudity", Action = "flag" },
        },
    },
});
```

</codetabs-item>

</codetabs>

You can also configure policies per feed group (e.g., `feeds:user` for user feeds, `feeds:timeline` for timeline feeds) to apply different moderation rules to different feed types.
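
For example, a per-group policy is the same upsert call with a more specific key. The sketch below reuses the Go request types from the tab above and tightens the text rules for the `feeds:user` group only; other groups fall back to the app-wide `feeds` policy:

```go
// Sketch: stricter text rules for the "user" feed group only.
client.Moderation().UpsertConfig(ctx, &getstream.UpsertConfigRequest{
    Key: "feeds:user",
    AiTextConfig: &getstream.AITextConfig{
        Rules: []getstream.AITextRule{
            {Label: "SPAM", Action: "remove"},
        },
    },
})
```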

## Auto-Moderation on Activities

Once the policy is configured, activities are automatically moderated when created or updated. No additional code is needed -- just add activities as you normally would.

When an activity contains content that violates your policy, Stream takes the configured action:

- **remove** -- the activity is removed and appears in the review queue
- **flag** -- the activity is published but flagged for review
- **bounce** -- the activity is rejected entirely and not saved
- **shadow_block** -- the activity is hidden from other users but visible to the creator

The API response includes a `moderation` object with details about the action taken:

```json
{
  "id": "activity-123",
  "actor": "user-456",
  "verb": "post",
  "object": "...",
  "moderation": {
    "status": "complete",
    "recommended_action": "remove",
    "text_harms": ["harassment"],
    "blocklists_matched": ["profanity_en"]
  }
}
```

| Field                | Type     | Description                                                          |
| -------------------- | -------- | -------------------------------------------------------------------- |
| `status`             | string   | Moderation status: `complete` or `partial` (if async checks pending) |
| `recommended_action` | string   | Action taken: `keep`, `flag`, `remove`, `bounce`, `shadow_block`     |
| `text_harms`         | string[] | Labels from AI text analysis (e.g., `["harassment", "spam"]`)        |
| `image_harms`        | string[] | Labels from image moderation (e.g., `["nudity"]`)                    |
| `blocklists_matched` | string[] | Blocklists that matched (e.g., `["profanity_en"]`)                   |

## Auto-Moderation on Comments

Comments are also automatically moderated using the same policy. When a user adds a comment, Stream checks the content and applies the configured action.

If a comment is blocked, it will not be visible to other users. The response will indicate the moderation action:

```json
{
  "id": "comment-789",
  "text": "...",
  "moderation": {
    "status": "complete",
    "recommended_action": "remove",
    "text_harms": ["toxicity"]
  }
}
```

<admonition type="info">

Only activities and comments are auto-moderated; reactions (likes, etc.) are not.

</admonition>

## Monitoring Moderated Content

You can monitor all flagged or blocked content from the Stream dashboard. The dashboard provides three review queues:

### Users Queue

Contains all users who were flagged directly or whose content was flagged. Actions available:

- **Mark Reviewed** -- acknowledge the user has been assessed
- **Permanently Ban User** -- restrict the user from the platform
- **Temporarily Ban User** -- restrict access for a specified period
- **Delete User** -- remove the user account
- **Delete All Content** -- remove all content from the user

### Text Queue

Contains flagged or blocked text content. Actions available:

- **Mark Reviewed** -- content doesn't require further action
- **Delete** -- remove the content from the platform
- **Unblock** -- restore content that was incorrectly blocked

### Media Queue

Contains flagged or blocked images and videos. The same actions as in the Text Queue are available.

## User-Driven Actions

In addition to automated moderation, users can flag content or other users for review.

### Flag a User

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.flag({
  entity_type: "stream:user",
  entity_id: targetUserId,
  reason: "harassment",
  user_id: reportingUserId,
});
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().flag(
    entity_type="stream:user",
    entity_id=target_user_id,
    reason="harassment",
    user_id=reporting_user_id,
)
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
client.Moderation().Flag(ctx, &getstream.FlagRequest{
    EntityType: "stream:user",
    EntityID:   targetUserID,
    Reason:     getstream.PtrTo("harassment"),
    UserID:     getstream.PtrTo(reportingUserID),
})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation().flag(FlagRequest.builder()
    .entityType("stream:user")
    .entityID(targetUserId)
    .reason("harassment")
    .userId(reportingUserId)
    .build()).execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->flag(new FlagRequest(
    entityType: 'stream:user',
    entityID: $targetUserId,
    reason: 'harassment',
    userID: $reportingUserId,
));
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.flag(GetStream::Generated::Models::FlagRequest.new(
  entity_type: "stream:user",
  entity_id: target_user_id,
  reason: "harassment",
  user_id: reporting_user_id
))
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.FlagAsync(new FlagRequest
{
    EntityType = "stream:user",
    EntityID = targetUserId,
    Reason = "harassment",
    UserID = reportingUserId,
});
```

</codetabs-item>

</codetabs>

### Ban a User

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.ban({
  target_user_id: targetUserId,
  banned_by_id: moderatorId,
  reason: "Repeated violations",
});
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().ban(
    target_user_id=target_user_id,
    banned_by_id=moderator_id,
    reason="Repeated violations",
)
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
client.Moderation().Ban(ctx, &getstream.BanRequest{
    TargetUserID: targetUserID,
    BannedByID:   getstream.PtrTo(moderatorID),
    Reason:       getstream.PtrTo("Repeated violations"),
})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation().ban(BanRequest.builder()
    .targetUserID(targetUserId)
    .bannedByID(moderatorId)
    .reason("Repeated violations")
    .build()).execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->ban(new BanRequest(
    targetUserID: $targetUserId,
    bannedByID: $moderatorId,
    reason: 'Repeated violations',
));
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.ban(GetStream::Generated::Models::BanRequest.new(
  target_user_id: target_user_id,
  banned_by_id: moderator_id,
  reason: "Repeated violations"
))
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.BanAsync(new BanRequest
{
    TargetUserID = targetUserId,
    BannedByID = moderatorId,
    Reason = "Repeated violations",
});
```

</codetabs-item>

</codetabs>

Banned users are prevented from reading feeds and creating activities or reactions.

## Next Steps

- [Configuration](/moderation/docs/<framework>/configuration/policies/) -- Advanced policy setup and per-feed-group configs
- [Review Queue](/moderation/docs/<framework>/content-moderation/review-queue/) -- Full review queue API reference
- [Webhooks](/moderation/docs/<framework>/content-moderation/webhooks/) -- Get notified when content is moderated
- [Feeds v2 Moderation](/moderation/docs/<framework>/integrations/stream-feeds-v2/) -- Template-based moderation for Feeds v2


---

This page was last updated at 2026-04-16T18:29:40.059Z.

For the most recent version of this documentation, visit [https://getstream.io/moderation/docs/go-golang/integrations/stream-feeds/](https://getstream.io/moderation/docs/go-golang/integrations/stream-feeds/).