# Policies

## Overview

Moderation configurations (also called policies) define how content is moderated within your application. Each configuration specifies which moderation engines are active, what rules apply to detected content categories, and what actions to take when rules are triggered. Configurations are identified by a unique key and can be scoped to specific teams for multi-tenancy support.

You can combine multiple moderation engines in a single configuration -- for example, AI text analysis for detecting harassment, blocklists for filtering profanity, and image moderation for detecting nudity. Each engine has its own set of rules that map detected labels or patterns to actions such as flagging, removing, or shadow-blocking content.

## Upsert Config

Creates or updates a moderation configuration. If a configuration with the specified key already exists, it will be replaced. A configuration can include any combination of moderation engines: AI text analysis, AI image moderation, blocklist filtering, toxicity detection, platform circumvention detection, velocity filters, and video call rules.

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.upsertConfig({
  key: "my_config",
  ai_text_config: {
    rules: [
      { label: "SPAM", action: "flag" },
      {
        label: "HARASSMENT",
        severity_rules: [
          { severity: "low", action: "flag" },
          { severity: "high", action: "remove" },
        ],
      },
    ],
  },
  block_list_config: {
    rules: [{ name: "profanity_en", action: "remove" }],
  },
  ai_image_config: {
    rules: [{ label: "Non-Explicit Nudity", action: "flag" }],
  },
});
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().upsert_config(
    key="my_config",
    ai_text_config=AITextConfig(
        rules=[
            AITextRule(label="SPAM", action="flag"),
            AITextRule(label="HARASSMENT", severity_rules=[
                SeverityRule(severity="low", action="flag"),
                SeverityRule(severity="high", action="remove"),
            ]),
        ],
    ),
    block_list_config=BlockListConfig(
        rules=[BlockListRule(name="profanity_en", action="remove")],
    ),
    ai_image_config=AIImageConfig(
        rules=[AIImageRule(label="Non-Explicit Nudity", action="flag")],
    ),
)
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
client.Moderation().UpsertConfig(ctx, &getstream.UpsertConfigRequest{
    Key: "my_config",
    AiTextConfig: &getstream.AITextConfig{
        Rules: []getstream.AITextRule{
            {Label: "SPAM", Action: "flag"},
            {Label: "HARASSMENT", SeverityRules: []getstream.SeverityRule{
                {Severity: "low", Action: "flag"},
                {Severity: "high", Action: "remove"},
            }},
        },
    },
    BlockListConfig: &getstream.BlockListConfig{
        Rules: []getstream.BlockListRule{
            {Name: "profanity_en", Action: "remove"},
        },
    },
    AiImageConfig: &getstream.AIImageConfig{
        Rules: []getstream.AIImageRule{
            {Label: "Non-Explicit Nudity", Action: "flag"},
        },
    },
})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation()
    .upsertConfig(UpsertConfigRequest.builder()
        .key("my_config")
        .aiTextConfig(AITextConfig.builder()
            .rules(List.of(
                AITextRule.builder().label("SPAM").action("flag").build(),
                AITextRule.builder().label("HARASSMENT").severityRules(List.of(
                    SeverityRule.builder().severity("low").action("flag").build(),
                    SeverityRule.builder().severity("high").action("remove").build()
                )).build()
            )).build())
        .blockListConfig(BlockListConfig.builder()
            .rules(List.of(BlockListRule.builder().name("profanity_en").action("remove").build()))
            .build())
        .aiImageConfig(AIImageConfig.builder()
            .rules(List.of(AIImageRule.builder().label("Non-Explicit Nudity").action("flag").build()))
            .build())
        .build())
    .execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->upsertConfig(new UpsertConfigRequest(
    key: 'my_config',
    aiTextConfig: new AITextConfig(
        rules: [
            new AITextRule(label: 'SPAM', action: 'flag'),
            new AITextRule(label: 'HARASSMENT', severityRules: [
                new SeverityRule(severity: 'low', action: 'flag'),
                new SeverityRule(severity: 'high', action: 'remove'),
            ]),
        ],
    ),
    blockListConfig: new BlockListConfig(
        rules: [new BlockListRule(name: 'profanity_en', action: 'remove')],
    ),
    aiImageConfig: new AIImageConfig(
        rules: [new AIImageRule(label: 'Non-Explicit Nudity', action: 'flag')],
    ),
));
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.upsert_config(
  GetStream::Generated::Models::UpsertConfigRequest.new(
    key: "my_config",
    ai_text_config: GetStream::Generated::Models::AITextConfig.new(
      rules: [
        GetStream::Generated::Models::AITextRule.new(label: "SPAM", action: "flag"),
        GetStream::Generated::Models::AITextRule.new(label: "HARASSMENT", severity_rules: [
          GetStream::Generated::Models::SeverityRule.new(severity: "low", action: "flag"),
          GetStream::Generated::Models::SeverityRule.new(severity: "high", action: "remove"),
        ]),
      ],
    ),
    block_list_config: GetStream::Generated::Models::BlockListConfig.new(
      rules: [GetStream::Generated::Models::BlockListRule.new(name: "profanity_en", action: "remove")],
    ),
    ai_image_config: GetStream::Generated::Models::AIImageConfig.new(
      rules: [GetStream::Generated::Models::AIImageRule.new(label: "Non-Explicit Nudity", action: "flag")],
    ),
  )
)
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.UpsertConfigAsync(new UpsertConfigRequest
{
    Key = "my_config",
    AiTextConfig = new AITextConfig
    {
        Rules = new List<AITextRule>
        {
            new() { Label = "SPAM", Action = "flag" },
            new() { Label = "HARASSMENT", SeverityRules = new List<SeverityRule>
            {
                new() { Severity = "low", Action = "flag" },
                new() { Severity = "high", Action = "remove" },
            }},
        },
    },
    BlockListConfig = new BlockListConfig
    {
        Rules = new List<BlockListRule>
        {
            new() { Name = "profanity_en", Action = "remove" },
        },
    },
    AiImageConfig = new AIImageConfig
    {
        Rules = new List<AIImageRule>
        {
            new() { Label = "Non-Explicit Nudity", Action = "flag" },
        },
    },
});
```

</codetabs-item>

</codetabs>

### Request Parameters

| key                                   | required | type    | description                                                                                    |
| ------------------------------------- | -------- | ------- | ---------------------------------------------------------------------------------------------- |
| key                                   | true     | string  | Unique identifier for the config. Used when sending content for moderation check.              |
| team                                  | false    | string  | Team identifier for multi-tenancy. Scopes the config to a specific team.                       |
| async                                 | false    | boolean | When true, moderation checks using this config run asynchronously.                             |
| ai_text_config                        | false    | object  | Configuration for AI text analysis. Define rules mapping harm labels to actions.               |
| ai_image_config                       | false    | object  | Configuration for AI image moderation (e.g., nudity, violence detection).                      |
| ai_video_config                       | false    | object  | Configuration for AI video moderation.                                                         |
| block_list_config                     | false    | object  | Configuration for blocklist-based filtering. Define which blocklists to use and their actions. |
| automod_toxicity_config               | false    | object  | Configuration for Stream's built-in toxicity detection engine.                                 |
| automod_platform_circumvention_config | false    | object  | Configuration for platform circumvention detection (phone numbers, external links, etc.).      |
| velocity_filter_config                | false    | object  | Configuration for velocity-based filtering (rate limiting repeated content).                   |
| llm_config                            | false    | object  | Configuration for LLM-based moderation.                                                        |

### Config Key Conventions

The config key follows a hierarchical naming convention that determines its scope:

- **Chat channel**: `chat:messaging:channel_id` -- applies to a specific channel
- **Chat channel type**: `chat:messaging` -- applies to all channels of a given type
- **All chat channels**: `chat` -- applies to all chat channels
- **Activity Feeds**: `feeds:default` -- applies to activity feeds

When content is submitted for moderation, the system looks for the most specific config key first, then falls back to broader scopes.
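The fallback order described above can be sketched as a pure function. This is an illustration of the documented lookup order, not the actual server-side implementation; `config_key_candidates` and `resolve_config` are hypothetical helper names.

```ruby
# Sketch of the documented fallback order for hierarchical config keys.
# Given the most specific key (e.g. "chat:messaging:general"), list the
# candidate keys from most to least specific.
def config_key_candidates(key)
  parts = key.split(":")
  # "chat:messaging:general" -> ["chat:messaging:general", "chat:messaging", "chat"]
  parts.length.downto(1).map { |n| parts.first(n).join(":") }
end

# Return the first candidate key that has a stored config, or nil.
def resolve_config(key, stored_configs)
  config_key_candidates(key).find { |k| stored_configs.key?(k) }
end
```

For example, if only a `chat` config exists, a check against `chat:messaging:general` resolves to `chat`; adding a `chat:messaging` config would take precedence for that channel type.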

## Get Config

Retrieve a specific moderation configuration by its key. Returns the full configuration object including all engine settings and rules.

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.getConfig({ key: "my_config" });
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().get_config("my_config")
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
client.Moderation().GetConfig(ctx, "my_config", &getstream.GetConfigRequest{})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation().getConfig("my_config").execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->getConfig('my_config');
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.get_config("my_config")
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.GetConfigAsync("my_config");
```

</codetabs-item>

</codetabs>

### Request Parameters

| key  | required | type   | description                                      |
| ---- | -------- | ------ | ------------------------------------------------ |
| key  | true     | string | The unique identifier of the config to retrieve. |
| team | false    | string | Team identifier for multi-tenancy.               |

### Response

| key    | type   | description                               |
| ------ | ------ | ----------------------------------------- |
| config | object | The full moderation configuration object. |

## Delete Config

Delete a moderation configuration. Once deleted, the config no longer applies; content checks that referenced this key fall back to broader-scoped configs, if any exist (see Config Key Conventions above).

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.deleteConfig({ key: "my_config" });
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().delete_config("my_config")
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
client.Moderation().DeleteConfig(ctx, "my_config", &getstream.DeleteConfigRequest{})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation().deleteConfig("my_config").execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->deleteConfig('my_config');
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.delete_config("my_config")
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.DeleteConfigAsync("my_config");
```

</codetabs-item>

</codetabs>

### Request Parameters

| key  | required | type   | description                                    |
| ---- | -------- | ------ | ---------------------------------------------- |
| key  | true     | string | The unique identifier of the config to delete. |
| team | false    | string | Team identifier for multi-tenancy.             |

## Query Configs

Search and filter moderation configurations with support for sorting and pagination. Use this endpoint to list all configurations or find specific ones matching filter criteria.

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.queryModerationConfigs({
  filter: {},
  sort: [{ field: "created_at", direction: -1 }],
  limit: 10,
});
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().query_moderation_configs(
    filter={},
    sort=[{"field": "created_at", "direction": -1}],
    limit=10,
)
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
client.Moderation().QueryModerationConfigs(ctx, &getstream.QueryModerationConfigsRequest{
    Filter: map[string]interface{}{},
    Sort: []getstream.SortParam{
        {Field: "created_at", Direction: getstream.PtrTo(-1)},
    },
    Limit: getstream.PtrTo(10),
})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation().queryModerationConfigs(QueryModerationConfigsRequest.builder()
    .filter(Map.of())
    .sort(List.of(SortParam.builder().field("created_at").direction(-1).build()))
    .limit(10)
    .build()).execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->queryModerationConfigs(new QueryModerationConfigsRequest(
    filter: (object)[],
    sort: [new SortParam(field: 'created_at', direction: -1)],
    limit: 10,
));
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.query_moderation_configs(GetStream::Generated::Models::QueryModerationConfigsRequest.new(
  filter: {},
  sort: [GetStream::Generated::Models::SortParam.new(field: "created_at", direction: -1)],
  limit: 10
))
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.QueryModerationConfigsAsync(new QueryModerationConfigsRequest
{
    Filter = new Dictionary<string, object>(),
    Sort = new List<SortParam>
    {
        new() { Field = "created_at", Direction = -1 },
    },
    Limit = 10,
});
```

</codetabs-item>

</codetabs>

### Request Parameters

| key    | required | type   | description                            |
| ------ | -------- | ------ | -------------------------------------- |
| filter | false    | object | Filter conditions for configs.         |
| sort   | false    | array  | Sort parameters (e.g., by created_at). |
| limit  | false    | number | Maximum number of configs to return.   |
| next   | false    | string | Cursor for pagination.                 |

### Response

| key     | type   | description                        |
| ------- | ------ | ---------------------------------- |
| configs | array  | List of moderation config objects. |
| next    | string | Next cursor for pagination.        |
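Fetching every config follows the standard cursor pattern: pass each response's `next` value into the following request until no cursor is returned. A minimal sketch with a stubbed page fetcher standing in for the `query_moderation_configs` call:

```ruby
# Collect all configs by following the `next` cursor until it runs out.
# `fetch_page` stands in for the query call: it takes a cursor (nil for the
# first page) and returns a hash with "configs" and an optional "next".
def all_configs(fetch_page)
  configs = []
  cursor = nil
  loop do
    page = fetch_page.call(cursor)
    configs.concat(page["configs"])
    cursor = page["next"]
    break if cursor.nil?
  end
  configs
end
```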

## Config Structure Reference

Each moderation configuration can include one or more engine-specific sub-configurations. The following table summarizes the available config types and where to find detailed documentation for each engine.

| Config Field                          | Description                                                         | Engine Documentation                                                                |
| ------------------------------------- | ------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| ai_text_config                        | AI-powered text analysis for detecting harm categories              | [AI Text](/moderation/docs/<framework>/engines/ai-text/)                            |
| ai_image_config                       | AI-powered image moderation for nudity, violence, and more          | [Image Moderation](/moderation/docs/<framework>/engines/image-moderation/)          |
| block_list_config                     | Blocklist and regex-based filtering for known bad words or patterns | [Blocklists and Regex Filters](/moderation/docs/<framework>/configuration/filters/) |
| automod_toxicity_config               | Stream's built-in toxicity scoring engine                           | --                                                                                  |
| automod_platform_circumvention_config | Detection of phone numbers, emails, and external links              | --                                                                                  |
| velocity_filter_config                | Rate limiting for repeated or high-volume content                   | --                                                                                  |
| ai_video_config                       | AI-powered video call moderation                                    | --                                                                                  |
| llm_config                            | LLM-based moderation for custom policies                            | --                                                                                  |

### Rule Actions

Each rule within a config maps a detected label or pattern to an action. The following actions are available:

| Action         | Description                                                                                                      |
| -------------- | ---------------------------------------------------------------------------------------------------------------- |
| `flag`         | Flag the content for human review. The content remains visible but appears in the moderation review queue.       |
| `remove`       | Remove the content immediately. The content is deleted and a review queue item is created.                       |
| `shadow_block` | Shadow-block the content. The content appears visible to the author but is hidden from other users.              |
| `bounce`       | Bounce the content back to the sender. The content is rejected before it is published, and the user is notified. |
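The visibility outcomes in the table can be summarized in a small helper. This is a sketch of the semantics only (`visible_after_action?` is a hypothetical name, not SDK code):

```ruby
# Sketch of each action's visibility outcome, per the table above.
# Returns whether the content is still visible to a given viewer.
def visible_after_action?(action, viewer_is_author:)
  case action
  when "flag"         then true             # stays visible, queued for review
  when "remove"       then false            # deleted for everyone
  when "shadow_block" then viewer_is_author # only the author still sees it
  when "bounce"       then false            # rejected before publishing
  else raise ArgumentError, "unknown action: #{action}"
  end
end
```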

### Severity Levels

For AI text analysis rules, you can define severity-based rules instead of a single action. This allows different actions depending on how severe the detected violation is. The available severity levels are:

| Severity   | Description                                                                |
| ---------- | -------------------------------------------------------------------------- |
| `low`      | Minor or borderline violations. Typically used for flagging or monitoring. |
| `medium`   | Moderate violations that may warrant content removal or closer review.     |
| `high`     | Serious violations that typically result in content removal.               |
| `critical` | The most severe violations requiring immediate action.                     |

When using severity rules, each severity level can be mapped to a different action. For example, you might flag low-severity harassment but automatically remove high-severity harassment, as shown in the Upsert Config example above.
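Resolving the action for a detected severity can be sketched as a simple lookup over the rule shape used in the Upsert Config example. The helper name is hypothetical, and the behavior for an unmatched severity (here, `nil`) is an assumption; the actual server-side fallback is not specified in this document:

```ruby
# Sketch: given a rule's severity_rules (as in the Upsert example) and a
# detected severity, return the configured action, or nil if none matches.
def action_for_severity(severity_rules, detected_severity)
  match = severity_rules.find { |r| r[:severity] == detected_severity }
  match && match[:action]
end
```

With the harassment rules from the example above, a `low` detection maps to `flag` and a `high` detection to `remove`.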

