# Check Content

## Overview

Moderation APIs allow you to integrate moderation capabilities into your product or application. You can use a combination of these APIs along with Stream's Moderation Dashboard to prevent app users from posting harmful content on your platform. Additionally, you can build a custom dashboard using these APIs.

Let's examine each of the moderation APIs provided by Stream.

## Prerequisites

- Make sure you have created an app on Stream. If not, refer to the [Guide to Getting Started with Stream](https://getstream.io/blog/stream-getting-started-guide/)
- The following API documentation presents examples from Stream's server-side SDKs. See the SDK setup guides for [Node](https://github.com/GetStream/stream-node), [Python](https://github.com/GetStream/stream-py), [Go](https://github.com/GetStream/getstream-go), [Java](https://github.com/GetStream/stream-sdk-java), [PHP](https://github.com/GetStream/getstream-php), [Ruby](https://github.com/GetStream/getstream-ruby), or [.NET](https://github.com/GetStream/getstream-net).

## Moderation Check Endpoint

The Moderation Check Endpoint enables real-time content moderation by analyzing submitted content and providing actionable recommendations. This comprehensive API supports multiple content formats, including text messages, images, and video content.

The process follows these simple steps:

1. Set up your moderation policy with defined rules and actions (see [Policy Setup](/moderation/docs/<framework>/configuration/policies/))
2. Submit content for review along with your policy key
3. The API analyzes the content using configured moderation engines and rules
4. Receive an immediate recommendation based on the analysis

By integrating this endpoint into your application workflow, you can efficiently maintain content quality and ensure compliance with your platform's standards while providing a safe environment for your users.
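To make the flow concrete, here is a minimal Python sketch of steps 2–4 with a stubbed transport standing in for the SDK call. `submit_check` and `flag_spam` are hypothetical names invented for this sketch; only the request and response shapes follow this page.

```python
# Sketch of the check flow. The `transport` argument stands in for the
# real SDK call shown under "API Usage"; here it is a stub.

def submit_check(transport, entity_id: str, texts: list[str], config_key: str) -> dict:
    """Build a check request (step 2) and return the recommendation (step 4)."""
    request = {
        "entity_type": "post",            # any string categorizing your content
        "entity_id": entity_id,
        "entity_creator_id": "user-123",  # the app user who created the content
        "moderation_payload": {"texts": texts},
        "config_key": config_key,         # the policy configured in step 1
    }
    return transport(request)

# A stub transport that "analyzes" the payload (step 3) by flagging
# any text containing a blocked word:
def flag_spam(request: dict) -> dict:
    harmful = any("spam" in t for t in request["moderation_payload"]["texts"])
    return {
        "status": "complete",
        "recommended_action": "flag" if harmful else "keep",
    }

result = submit_check(flag_spam, "post-1", ["buy spam now"], "my_policy")
```

In production, only the stub changes: the same request shape goes through the SDK's `check` call instead of `flag_spam`.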

### API Usage

<admonition type="info">

In the context of the Moderation API, the term "entity" refers to a piece of content that you send for moderation. This can be text, an image, and/or a video.

</admonition>

<codetabs>

<codetabs-item value="js" label="Node">

```js
const response = await client.moderation.check({
  entity_type: "entity_type",
  entity_id: "entity_id",
  entity_creator_id: "entity_creator_id",
  moderation_payload: {
    texts: ["this is bullshit", "f*ck you as***le"],
    images: ["example.com/test.jpg"],
  },
  config_key: "config_key",
  options: { force_sync: true },
});
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
response = client.moderation().check(
    entity_type="entity_type",
    entity_id="entity_id",
    entity_creator_id="entity_creator_id",
    moderation_payload=ModerationPayload(
        texts=["this is bullshit", "f*ck you as***le"],
        images=["example.com/test.jpg"],
    ),
    config_key="config_key",
    options={"force_sync": True},
)
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
response, err := client.Moderation().Check(ctx, &getstream.CheckRequest{
    EntityType:      "entity_type",
    EntityID:        "entity_id",
    EntityCreatorID: "entity_creator_id",
    ModerationPayload: &getstream.ModerationPayload{
        Texts:  []string{"this is bullshit", "f*ck you as***le"},
        Images: []string{"example.com/test.jpg"},
    },
    ConfigKey: getstream.PtrTo("config_key"),
    Options:   map[string]any{"force_sync": true},
})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
CheckResponse response = client.moderation()
    .check(CheckRequest.builder()
        .entityType("entity_type")
        .entityID("entity_id")
        .entityCreatorID("entity_creator_id")
        .moderationPayload(ModerationPayload.builder()
            .texts(List.of("this is bullshit", "f*ck you as***le"))
            .images(List.of("example.com/test.jpg"))
            .build())
        .configKey("config_key")
        .options(Map.of("force_sync", true))
        .build())
    .execute()
    .getData();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$response = $client->moderation()->check(new CheckRequest(
    entityCreatorID: 'entity_creator_id',
    entityID: 'entity_id',
    entityType: 'entity_type',
    moderationPayload: new ModerationPayload(
        texts: ['this is bullshit', 'f*ck you as***le'],
        images: ['example.com/test.jpg'],
    ),
    configKey: 'config_key',
    options: (object)['force_sync' => true],
));
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
response = client.moderation.check(
  GetStream::Generated::Models::CheckRequest.new(
    entity_type: "entity_type",
    entity_id: "entity_id",
    entity_creator_id: "entity_creator_id",
    moderation_payload: GetStream::Generated::Models::ModerationPayload.new(
      texts: ["this is bullshit", "f*ck you as***le"],
      images: ["example.com/test.jpg"],
    ),
    config_key: "config_key",
    options: { force_sync: true },
  )
)
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
var response = await client.Moderation.CheckAsync(new CheckRequest
{
    EntityType = "entity_type",
    EntityID = "entity_id",
    EntityCreatorID = "entity_creator_id",
    ModerationPayload = new ModerationPayload
    {
        Texts = new List<string> { "this is bullshit", "f*ck you as***le" },
        Images = new List<string> { "example.com/test.jpg" },
    },
    ConfigKey = "config_key",
    Options = new Dictionary<string, object> { { "force_sync", true } },
});
```

</codetabs-item>

</codetabs>

### Request Params

| key                | required | type   | description                                                                                                                                                                                                                                                         |
| ------------------ | -------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| entity_type        | true     | string | Identifier for the type of content you are sending for moderation. It can be any string, and it helps categorize content on the dashboard. For example, if you have multiple products, you can set a unique entity_type for content coming from each product. |
| entity_id          | true     | string | Unique identifier for the entity.                                                                                                                                                                                                                            |
| entity_creator_id  | true     | string | Unique identifier for the user who created this entity. Generally this is the user ID of the app user.                                                                                                                                                       |
| moderation_payload | true     | object | The entity or content to be moderated.                                                                                                                                                                                                                       |
| config_key         | true     | string | Key of the config to use for moderation. See [Configuration](/moderation/docs/<framework>/configuration/policies/) for details.                                                                                                                              |

### Response

| key                | type                       | description                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| ------------------ | -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| status             | “complete” , “partial”     | Status of moderation. If you have configured both synchronous and asynchronous (e.g., image) moderation, the status will be “partial”, because async moderation is still running in a background task.                                                                                                                                                                                                                                                                             |
| task_id            | string                     | ID of the task running the async moderation. You can check the status of the task itself using the [GetTask](https://getstream.github.io/protocol/?urls.primaryName=Chat#/product%3Achat/GetTask) endpoint.                                                                                                                                                                                                                                                                       |
| recommended_action | “flag” , “remove” , “keep” | Final result of moderation, which suggests what action should be taken for the moderated entity. <ul><li>“flag” suggests that the content needs manual review, either on the Stream dashboard or a custom dashboard you may have</li> <li>“remove” suggests that this content should be removed from the platform. You will still be able to access and review it from the Stream dashboard</li> <li>“keep” suggests that this content is safe and doesn’t contain any harmful material</li> </ul> |
| item               | object                     | A JSON representation of the review queue item accessible on the Stream dashboard. You can use this to compose your own dashboard UI.                                                                                                                                                                                                                                                                                                                                             |
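Putting the response fields together, a typical application-side handler branches on `recommended_action` and defers to the background task when `status` is “partial”. This is a minimal sketch assuming the response has been deserialized into a plain dict; the handler's return values are illustrative, not part of the SDK.

```python
# Minimal sketch of acting on a check response, assuming it has been
# deserialized into a plain dict. The return values are illustrative.

def handle_check_response(response: dict) -> str:
    if response.get("status") == "partial":
        # Async (e.g., image) moderation is still running; the final
        # result will be available via the task in response["task_id"].
        return f"pending:{response['task_id']}"
    action = response.get("recommended_action", "keep")
    if action == "remove":
        return "removed"   # take the content down
    if action == "flag":
        return "queued"    # route to manual review
    return "published"     # "keep": safe to show

print(handle_check_response({"status": "complete", "recommended_action": "remove"}))
# prints "removed"
```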

## Related APIs

The moderation API is organized across several pages. After checking content, you will typically need to:

- [Configuration](/moderation/docs/<framework>/configuration/policies/) — Create and manage moderation policies (configs) that define which engines and rules to use.
- [Flag, Mute & Ban](/moderation/docs/<framework>/content-moderation/flag-mute-ban/) — Flag content, mute users, ban users, and block users.
- [Review Queue & Actions](/moderation/docs/<framework>/content-moderation/review-queue/) — Query the review queue and take actions on flagged content (delete, ban, restore, etc.).
- [Moderation Rules](/moderation/docs/<framework>/configuration/rules/) — Define automated rules that trigger actions based on moderation results.
- [Appeals](/moderation/docs/<framework>/content-moderation/appeals/) — Allow users to appeal moderation decisions.
- [Logs & Analytics](/moderation/docs/<framework>/content-moderation/logs-and-analytics/) — Query moderation logs, export data, and view analytics.

## Custom Check

Custom Check enables you to submit your own AI moderation results, for a chat message or custom uploaded content, for review by moderators on the Dashboard.

### Add Custom Flags to a Chat Message

Custom flags can be added to a chat message. This enables you to submit your own AI moderation results for that message for review by moderators on the Dashboard.

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.customCheck({
  entity_type: "stream:chat:v1:message",
  entity_id: messageId,
  entity_creator_id: "user_id",
  flags: [
    {
      type: "custom_check_image",
      reason: "Image was NSFW",
      labels: ["NSFW"],
    },
    {
      type: "custom_check_text",
      reason: "Text was harmful",
      labels: ["harmful"],
    },
    {
      type: "custom_check_video",
      reason: "Video contains copyright material",
      labels: ["copyright-violation"],
    },
  ],
});
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().custom_check(
    entity_type="stream:chat:v1:message",
    entity_id=message_id,
    entity_creator_id="user_id",
    flags=[
        CustomCheckFlag(type="custom_check_image", reason="Image was NSFW", labels=["NSFW"]),
        CustomCheckFlag(type="custom_check_text", reason="Text was harmful", labels=["harmful"]),
        CustomCheckFlag(type="custom_check_video", reason="Video contains copyright material", labels=["copyright-violation"]),
    ],
)
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
response, err := client.Moderation().CustomCheck(ctx, &getstream.CustomCheckRequest{
    EntityType:      "stream:chat:v1:message",
    EntityID:        messageID,
    EntityCreatorID: "user_id",
    Flags: []getstream.CustomCheckFlag{
        {Type: "custom_check_image", Reason: getstream.PtrTo("Image was NSFW"), Labels: []string{"NSFW"}},
        {Type: "custom_check_text", Reason: getstream.PtrTo("Text was harmful"), Labels: []string{"harmful"}},
        {Type: "custom_check_video", Reason: getstream.PtrTo("Video contains copyright material"), Labels: []string{"copyright-violation"}},
    },
})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation()
    .customCheck(CustomCheckRequest.builder()
        .entityType("stream:chat:v1:message")
        .entityID(messageId)
        .entityCreatorID("user_id")
        .flags(List.of(
            CustomCheckFlag.builder().type("custom_check_image").reason("Image was NSFW").labels(List.of("NSFW")).build(),
            CustomCheckFlag.builder().type("custom_check_text").reason("Text was harmful").labels(List.of("harmful")).build(),
            CustomCheckFlag.builder().type("custom_check_video").reason("Video contains copyright material").labels(List.of("copyright-violation")).build()
        ))
        .build())
    .execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->customCheck(new CustomCheckRequest(
    entityType: 'stream:chat:v1:message',
    entityID: $messageId,
    entityCreatorID: 'user_id',
    flags: [
        new CustomCheckFlag(type: 'custom_check_image', reason: 'Image was NSFW', labels: ['NSFW']),
        new CustomCheckFlag(type: 'custom_check_text', reason: 'Text was harmful', labels: ['harmful']),
        new CustomCheckFlag(type: 'custom_check_video', reason: 'Video contains copyright material', labels: ['copyright-violation']),
    ],
));
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.custom_check(
  GetStream::Generated::Models::CustomCheckRequest.new(
    entity_type: "stream:chat:v1:message",
    entity_id: message_id,
    entity_creator_id: "user_id",
    flags: [
      GetStream::Generated::Models::CustomCheckFlag.new(type: "custom_check_image", reason: "Image was NSFW", labels: ["NSFW"]),
      GetStream::Generated::Models::CustomCheckFlag.new(type: "custom_check_text", reason: "Text was harmful", labels: ["harmful"]),
      GetStream::Generated::Models::CustomCheckFlag.new(type: "custom_check_video", reason: "Video contains copyright material", labels: ["copyright-violation"]),
    ],
  )
)
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.CustomCheckAsync(new CustomCheckRequest
{
    EntityType = "stream:chat:v1:message",
    EntityID = messageId,
    EntityCreatorID = "user_id",
    Flags = new List<CustomCheckFlag>
    {
        new() { Type = "custom_check_image", Reason = "Image was NSFW", Labels = new List<string> { "NSFW" } },
        new() { Type = "custom_check_text", Reason = "Text was harmful", Labels = new List<string> { "harmful" } },
        new() { Type = "custom_check_video", Reason = "Video contains copyright material", Labels = new List<string> { "copyright-violation" } },
    },
});
```

</codetabs-item>

</codetabs>
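If your own classifier emits labels per modality, the flags array can be assembled programmatically rather than hand-written. A sketch, assuming a hypothetical classifier result shape; only the flag fields (`type`, `reason`, `labels`) follow this page.

```python
# Build the custom-check flags array from a hypothetical classifier
# result of the form {"image": ["NSFW"], "text": [], "video": [...]}.
# Only modalities with at least one label produce a flag.

FLAG_TYPES = {
    "text": "custom_check_text",
    "image": "custom_check_image",
    "video": "custom_check_video",
}

def build_flags(classifier_labels: dict[str, list[str]]) -> list[dict]:
    flags = []
    for modality, labels in classifier_labels.items():
        if labels:
            flags.append({
                "type": FLAG_TYPES[modality],
                "reason": f"{modality} flagged: {', '.join(labels)}",
                "labels": labels,
            })
    return flags

flags = build_flags({"image": ["NSFW"], "text": [], "video": ["copyright-violation"]})
# Pass `flags` as the `flags` param of the custom check call above.
```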

### Add Custom Check Flags to the Custom Content

The Custom Check endpoint also enables you to submit your own AI moderation results for custom content for review by moderators on the Dashboard.

<codetabs>

<codetabs-item value="js" label="Node">

```js
await client.moderation.customCheck({
  entity_type: "entity_type",
  entity_id: "entity_id",
  entity_creator_id: "entity_creator_id",
  moderation_payload: {
    texts: ["offensive"],
  },
  flags: [
    {
      type: "custom_check_text",
      reason: "Text was offensive",
      custom: {
        explanation:
          "custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements",
      },
    },
  ],
});
```

</codetabs-item>

<codetabs-item value="py" label="Python">

```python
client.moderation().custom_check(
    entity_type="entity_type",
    entity_id="entity_id",
    entity_creator_id="entity_creator_id",
    moderation_payload=ModerationPayload(texts=["offensive"]),
    flags=[
        CustomCheckFlag(
            type="custom_check_text",
            reason="Text was offensive",
            custom={"explanation": "custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements"},
        ),
    ],
)
```

</codetabs-item>

<codetabs-item value="go" label="Go">

```go
response, err := client.Moderation().CustomCheck(ctx, &getstream.CustomCheckRequest{
    EntityType:      "entity_type",
    EntityID:        "entity_id",
    EntityCreatorID: "entity_creator_id",
    ModerationPayload: &getstream.ModerationPayload{
        Texts: []string{"offensive"},
    },
    Flags: []getstream.CustomCheckFlag{
        {
            Type:   "custom_check_text",
            Reason: getstream.PtrTo("Text was offensive"),
            Custom: map[string]any{"explanation": "custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements"},
        },
    },
})
```

</codetabs-item>

<codetabs-item value="java" label="Java">

```java
client.moderation()
    .customCheck(CustomCheckRequest.builder()
        .entityType("entity_type")
        .entityID("entity_id")
        .entityCreatorID("entity_creator_id")
        .moderationPayload(ModerationPayload.builder()
            .texts(List.of("offensive"))
            .build())
        .flags(List.of(
            CustomCheckFlag.builder()
                .type("custom_check_text")
                .reason("Text was offensive")
                .custom(Map.of("explanation", "custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements"))
                .build()
        ))
        .build())
    .execute();
```

</codetabs-item>

<codetabs-item value="php" label="PHP">

```php
$client->moderation()->customCheck(new CustomCheckRequest(
    entityType: 'entity_type',
    entityID: 'entity_id',
    entityCreatorID: 'entity_creator_id',
    moderationPayload: new ModerationPayload(texts: ['offensive']),
    flags: [
        new CustomCheckFlag(
            type: 'custom_check_text',
            reason: 'Text was offensive',
            custom: (object)['explanation' => 'custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements'],
        ),
    ],
));
```

</codetabs-item>

<codetabs-item value="ruby" label="Ruby">

```ruby
client.moderation.custom_check(
  GetStream::Generated::Models::CustomCheckRequest.new(
    entity_type: "entity_type",
    entity_id: "entity_id",
    entity_creator_id: "entity_creator_id",
    moderation_payload: GetStream::Generated::Models::ModerationPayload.new(
      texts: ["offensive"],
    ),
    flags: [
      GetStream::Generated::Models::CustomCheckFlag.new(
        type: "custom_check_text",
        reason: "Text was offensive",
        custom: { explanation: "custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements" },
      ),
    ],
  )
)
```

</codetabs-item>

<codetabs-item value="dotnet" label=".NET">

```csharp
await client.Moderation.CustomCheckAsync(new CustomCheckRequest
{
    EntityType = "entity_type",
    EntityID = "entity_id",
    EntityCreatorID = "entity_creator_id",
    ModerationPayload = new ModerationPayload
    {
        Texts = new List<string> { "offensive" },
    },
    Flags = new List<CustomCheckFlag>
    {
        new()
        {
            Type = "custom_check_text",
            Reason = "Text was offensive",
            Custom = new Dictionary<string, object> { { "explanation", "custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements" } },
        },
    },
});
```

</codetabs-item>

</codetabs>

### Request Params

| key                | required | type                                                            | description                                                                                        |
| ------------------ | -------- | --------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| entity_type        | true     | string                                                          | Identifier for the type of the entity. This helps with categorizing content.                       |
| entity_id          | true     | string                                                          | Unique identifier for the entity.                                                                  |
| entity_creator_id  | true     | string                                                          | Unique identifier for the user who created this entity. Generally this is the user ID of the app user. |
| moderation_payload | true     | object                                                          | The entity or content to be moderated.                                                             |
| flags              | true     | array                                                           | List of custom flags.                                                                              |
| flag.type          | true     | "custom_check_text", "custom_check_image", "custom_check_video" | Type of a custom flag.                                                                             |
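Since `flag.type` only accepts the three `custom_check_*` values, it can be worth validating the flags array client-side before calling the endpoint. A small sketch; the validator itself is not part of the SDK.

```python
# Client-side validation of the flags array against the allowed types.

ALLOWED_FLAG_TYPES = {"custom_check_text", "custom_check_image", "custom_check_video"}

def validate_flags(flags: list[dict]) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    if not flags:
        problems.append("flags is required and must be non-empty")
    for i, flag in enumerate(flags):
        if flag.get("type") not in ALLOWED_FLAG_TYPES:
            problems.append(f"flags[{i}].type must be one of {sorted(ALLOWED_FLAG_TYPES)}")
    return problems

print(validate_flags([{"type": "custom_check_audio"}]))
```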

### Response

| key    | type                   | description                                                                                                                                                                                                                |
| ------ | ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| status | “complete” , “partial” | Status of moderation. If you have configured both synchronous and asynchronous (e.g., image) moderation, the status will be “partial”, because async moderation is still running in a background task. |
| id     | string                 | ID of the review queue item.                                                                                                                                                                          |
| item   | object                 | A JSON representation of the review queue item accessible on the Stream dashboard. You can use this to compose your own dashboard UI.                                                                 |


---

This page was last updated at 2026-04-16T18:29:44.197Z.
