Check Content

```typescript
const response = await client.moderation.check({
  entity_type: "entity_type",
  entity_id: "entity_id",
  entity_creator_id: "entity_creator_id",
  moderation_payload: {
    texts: ["this is bullshit", "f*ck you as***le"],
    images: ["example.com/test.jpg"],
  },
  config_key: "config_key",
  options: { force_sync: true },
});
```
Overview
Moderation APIs allow you to integrate moderation capabilities into your product or application. You can use a combination of these APIs along with Stream's Moderation Dashboard to prevent app users from posting harmful content on your platform. Additionally, you can build a custom dashboard using these APIs.
Let's examine each of the moderation APIs provided by Stream.
Prerequisites
- Make sure you have created an app on Stream. If not, refer to the Guide to Getting Started with Stream.
- The following API documentation presents examples from Stream's server-side SDKs. See the SDK setup guides for Node, Python, Go, Java, PHP, Ruby, or .NET.
Moderation Check Endpoint
The Moderation Check Endpoint enables real-time content moderation by analyzing submitted content and providing actionable recommendations. This comprehensive API supports multiple content formats, including text messages, images, and video content.
The process follows these simple steps:
- Set up your moderation policy with defined rules and actions (see Policy Setup)
- Submit content for review along with your policy key
- The API analyzes the content using configured moderation engines and rules
- Receive an immediate recommendation based on the analysis
By integrating this endpoint into your application workflow, you can efficiently maintain content quality and ensure compliance with your platform's standards while providing a safe environment for your users.
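The steps above can be sketched end to end. The interface and fake client below are illustrative stand-ins, not part of the Stream SDK; they only demonstrate the submit-then-act flow:

```go
package main

import "fmt"

// ModerationClient abstracts the check call so the flow can be shown
// without the real SDK. The method shape is an assumption for this
// sketch, not the SDK's actual interface.
type ModerationClient interface {
	Check(entityType, entityID, creatorID, configKey string, texts []string) (string, error)
}

// moderateAndPublish runs the submit -> analyze -> act loop:
// content is published only when the recommendation is "keep".
func moderateAndPublish(c ModerationClient, text string) (bool, error) {
	action, err := c.Check("post", "post-1", "user-1", "default", []string{text})
	if err != nil {
		return false, err
	}
	return action == "keep", nil
}

// fakeClient simulates a policy that removes anything equal to "spam".
type fakeClient struct{}

func (fakeClient) Check(_, _, _, _ string, texts []string) (string, error) {
	for _, t := range texts {
		if t == "spam" {
			return "remove", nil
		}
	}
	return "keep", nil
}

func main() {
	ok, _ := moderateAndPublish(fakeClient{}, "hello")
	fmt.Println(ok) // true: clean content is kept
	ok, _ = moderateAndPublish(fakeClient{}, "spam")
	fmt.Println(ok) // false: flagged content is withheld
}
```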
API Usage
In the context of the Moderation API, the term "entity" refers to a piece of content that you send for moderation. This can be text, an image, and/or a video.
```go
response, err := client.Moderation().Check(ctx, &getstream.CheckRequest{
	EntityType:      "entity_type",
	EntityID:        "entity_id",
	EntityCreatorID: "entity_creator_id",
	ModerationPayload: &getstream.ModerationPayload{
		Texts:  []string{"this is bullshit", "f*ck you as***le"},
		Images: []string{"example.com/test.jpg"},
	},
	ConfigKey: getstream.PtrTo("config_key"),
	Options:   map[string]any{"force_sync": true},
})
```

Request Params
| key | required | type | description |
|---|---|---|---|
| entity_type | true | string | Identifier for the type of content you are sending for moderation. This helps categorize content on the dashboard. For example, if you have multiple products, you can set a unique entity_type for content coming from each product. It can be any string. |
| entity_id | true | string | Unique identifier for the entity. |
| entity_creator_id | true | string | Unique identifier for the user who created this entity. Generally this is the user ID of the app user. |
| moderation_payload | true | object | The entity or content to be moderated. |
| config_key | true | string | Key of the config to be used for moderation. See Configuration for details. |
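Since all of these parameters are required, it can be useful to validate a request before sending it. The struct below is an illustrative sketch mirroring the table, not an SDK type:

```go
package main

import (
	"errors"
	"fmt"
)

// checkParams mirrors the required fields from the table above.
type checkParams struct {
	EntityType      string
	EntityID        string
	EntityCreatorID string
	ConfigKey       string
}

// validate returns an error naming the first missing required field,
// so a malformed request can be rejected before it reaches the API.
func (p checkParams) validate() error {
	switch {
	case p.EntityType == "":
		return errors.New("entity_type is required")
	case p.EntityID == "":
		return errors.New("entity_id is required")
	case p.EntityCreatorID == "":
		return errors.New("entity_creator_id is required")
	case p.ConfigKey == "":
		return errors.New("config_key is required")
	}
	return nil
}

func main() {
	p := checkParams{EntityType: "post", EntityID: "p1", EntityCreatorID: "u1"}
	fmt.Println(p.validate()) // config_key is required
}
```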
Response
| key | type | description |
|---|---|---|
| status | "complete", "partial" | Status of the moderation check. If you have configured both synchronous and asynchronous moderation (e.g., image moderation), the status will be "partial" because the async moderation is still running as a background task. |
| task_id | string | ID of the task running the async moderation. You can check the status of the task using the GetTask endpoint. |
| recommended_action | "flag", "remove", "keep" | Final result of the moderation check, suggesting what action should be taken for the moderated entity. |
| item | object | A JSON representation of the review queue item accessible on the Stream dashboard. You can use this to compose your own dashboard UI. |
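A minimal sketch of acting on this response, assuming the field values listed above; the struct and the returned decision strings are illustrative app-side choices, not part of the API:

```go
package main

import "fmt"

// checkResult holds the response fields described in the table above.
type checkResult struct {
	Status            string // "complete" or "partial"
	TaskID            string // set when async moderation is still running
	RecommendedAction string // "flag", "remove", or "keep"
}

// applyRecommendation maps the API's recommendation to an app-side
// decision. The return values are illustrative only.
func applyRecommendation(r checkResult) string {
	if r.Status == "partial" && r.TaskID != "" {
		// Async moderation (e.g., images) is still running; poll the
		// task via the GetTask endpoint before treating this verdict
		// as final.
	}
	switch r.RecommendedAction {
	case "remove":
		return "hide content"
	case "flag":
		return "publish and send to review queue"
	default: // "keep"
		return "publish"
	}
}

func main() {
	r := checkResult{Status: "complete", RecommendedAction: "remove"}
	fmt.Println(applyRecommendation(r)) // hide content
}
```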
Related APIs
The moderation API is organized across several pages. After checking content, you will typically need to:
- Configuration — Create and manage moderation policies (configs) that define which engines and rules to use.
- Flag, Mute & Ban — Flag content, mute users, ban users, and block users.
- Review Queue & Actions — Query the review queue and take actions on flagged content (delete, ban, restore, etc.).
- Moderation Rules — Define automated rules that trigger actions based on moderation results.
- Appeals — Allow users to appeal moderation decisions.
- Logs & Analytics — Query moderation logs, export data, and view analytics.
Custom Check
Custom Check enables you to submit your own AI moderation results for a chat message or custom uploaded content for review by moderators on the Dashboard.
Add Custom Flags to a Chat Message
Custom flags can be added to a chat message. This enables you to submit your own AI moderation results for a chat message for review by moderators on the Dashboard.
```go
response, err := client.Moderation().CustomCheck(ctx, &getstream.CustomCheckRequest{
	EntityType:      "stream:chat:v1:message",
	EntityID:        messageID,
	EntityCreatorID: "user_id",
	Flags: []getstream.CustomCheckFlag{
		{Type: "custom_check_image", Reason: getstream.PtrTo("Image was NSFW"), Labels: []string{"NSFW"}},
		{Type: "custom_check_text", Reason: getstream.PtrTo("Text was harmful"), Labels: []string{"harmful"}},
		{Type: "custom_check_video", Reason: getstream.PtrTo("Video contains copyright material"), Labels: []string{"copyright-violation"}},
	},
})
```

Add Custom Check Flags to Custom Content
The Custom Check endpoint enables you to submit your own AI moderation results for custom content for review by moderators on the Dashboard.
```go
response, err := client.Moderation().CustomCheck(ctx, &getstream.CustomCheckRequest{
	EntityType:      "entity_type",
	EntityID:        "entity_id",
	EntityCreatorID: "entity_creator_id",
	ModerationPayload: &getstream.ModerationPayload{
		Texts: []string{"offensive"},
	},
	Flags: []getstream.CustomCheckFlag{
		{
			Type:   "custom_check_text",
			Reason: getstream.PtrTo("Text was offensive"),
			Custom: map[string]any{"explanation": "custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements"},
		},
	},
})
```

Request Params
| key | required | type | description |
|---|---|---|---|
| entity_type | true | string | Identifier for the type of the entity. This helps categorize content. |
| entity_id | true | string | Unique identifier for the entity. |
| entity_creator_id | true | string | Unique identifier for the user who created this entity. Generally this is the user ID of the app user. |
| moderation_payload | true | object | The entity or content to be moderated. |
| flags | true | array | List of custom flags. |
| flag.type | true | "custom_check_text", "custom_check_image", "custom_check_video" | Type of the custom flag. |
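Because flag.type accepts only the three fixed values above, a small guard can catch typos before a request is sent. This is an illustrative sketch, not SDK code:

```go
package main

import "fmt"

// validFlagTypes lists the three flag.type values accepted by the
// Custom Check endpoint, per the table above.
var validFlagTypes = map[string]bool{
	"custom_check_text":  true,
	"custom_check_image": true,
	"custom_check_video": true,
}

// isValidFlagType reports whether t is one of the accepted values.
func isValidFlagType(t string) bool { return validFlagTypes[t] }

func main() {
	fmt.Println(isValidFlagType("custom_check_text"))  // true
	fmt.Println(isValidFlagType("custom_check_audio")) // false
}
```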
Response
| key | type | description |
|---|---|---|
| status | "complete", "partial" | Status of the moderation check. If you have configured both synchronous and asynchronous moderation (e.g., image moderation), the status will be "partial" because the async moderation is still running as a background task. |
| id | string | ID of the review queue item. |
| item | object | A JSON representation of the review queue item accessible on the Stream dashboard. You can use this to compose your own dashboard UI. |