Check Content

Overview

Moderation APIs allow you to integrate moderation capabilities into your product or application. You can use a combination of these APIs along with Stream's Moderation Dashboard to prevent app users from posting harmful content on your platform. Additionally, you can build a custom dashboard using these APIs.

Let's examine each of the moderation APIs provided by Stream.


Moderation Check Endpoint

The Moderation Check Endpoint enables real-time content moderation by analyzing submitted content and providing actionable recommendations. This comprehensive API supports multiple content formats, including text messages, images, and video content.

The process follows these simple steps:

  1. Set up your moderation policy with defined rules and actions (see Policy Setup)
  2. Submit content for review along with your policy key
  3. The API analyzes the content using configured moderation engines and rules
  4. Receive an immediate recommendation based on the analysis

By integrating this endpoint into your application workflow, you can efficiently maintain content quality and ensure compliance with your platform's standards while providing a safe environment for your users.

API Usage

In the context of the Moderation API, the term "entity" refers to a piece of content that you send for moderation. This can be text, an image, and/or a video.

response, err := client.Moderation().Check(ctx, &getstream.CheckRequest{
    EntityType:      "entity_type",
    EntityID:        "entity_id",
    EntityCreatorID: "entity_creator_id",
    ModerationPayload: &getstream.ModerationPayload{
        Texts:  []string{"this is bullshit", "f*ck you as***le"},
        Images: []string{"example.com/test.jpg"},
    },
    ConfigKey: getstream.PtrTo("config_key"),
    Options:   map[string]any{"force_sync": true},
})

Request Params

  • entity_type (string, required): Identifier for the type of content you are sending for moderation; it can be any string. This helps with categorizing content on the dashboard. For example, if you have multiple products, you can set a unique entity_type for content coming from each product.
  • entity_id (string, required): Unique identifier for the entity.
  • entity_creator_id (string, required): Unique identifier for the user who created this entity. Generally this is the user ID of the app user.
  • moderation_payload (object, required): The entity or content to be moderated.
  • config_key (string, required): Key of the config to be used for moderation. See Configuration for details.

Response

  • status ("complete" | "partial"): Status of moderation. If you have configured both synchronous and asynchronous moderation (e.g., image moderation), the status will be "partial" because the async moderations are still running as a background task.
  • task_id (string): ID of the task running the async moderation. You can check the status of the task itself using the GetTask endpoint.
  • recommended_action ("flag" | "remove" | "keep"): Final result of the moderation, suggesting what action should be taken for the moderated entity.
      • "flag" suggests that the content needs manual review, either on the Stream dashboard or on a custom dashboard you may have.
      • "remove" suggests that this content should be removed from the platform. You will still be able to access and review this content from the Stream dashboard.
      • "keep" suggests that this content is safe and doesn't contain any harms.
  • item (object): A JSON representation of the review queue item accessible on the Stream dashboard. You can use this to compose your own dashboard UI.
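Once the check returns, the recommended_action field drives what your application does with the content. A minimal sketch of that branching, assuming the action strings documented above (handleRecommendation is a hypothetical helper, not part of the SDK):

```go
package main

import "fmt"

// handleRecommendation maps the API's recommended_action value to an
// application-level decision. Hypothetical helper for illustration; the
// action strings are the values documented above.
func handleRecommendation(action string) string {
	switch action {
	case "keep":
		return "publish" // content is safe to show
	case "flag":
		return "hold" // queue for manual review on a dashboard
	case "remove":
		return "block" // hide the content from the platform
	default:
		return "hold" // unknown action: fail safe and hold for review
	}
}

func main() {
	fmt.Println(handleRecommendation("remove")) // prints "block"
}
```

Note that when status is "partial", async engines are still running; use the returned task_id with the GetTask endpoint before treating the recommendation as final.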

The moderation API is organized across several pages. After checking content, you will typically need to:

  • Configuration — Create and manage moderation policies (configs) that define which engines and rules to use.
  • Flag, Mute & Ban — Flag content, mute users, ban users, and block users.
  • Review Queue & Actions — Query the review queue and take actions on flagged content (delete, ban, restore, etc.).
  • Moderation Rules — Define automated rules that trigger actions based on moderation results.
  • Appeals — Allow users to appeal moderation decisions.
  • Logs & Analytics — Query moderation logs, export data, and view analytics.

Custom Check

Custom Check enables you to submit your own AI moderation results for a chat message or custom uploaded content for review by moderators on the Dashboard.

Add Custom Flags to a Chat Message

Custom Flags can be added to a Chat message. This enables you to submit your own AI moderation results for a chat message for review by moderators on the Dashboard.

response, err := client.Moderation().CustomCheck(ctx, &getstream.CustomCheckRequest{
    EntityType:      "stream:chat:v1:message",
    EntityID:        messageID,
    EntityCreatorID: "user_id",
    Flags: []getstream.CustomCheckFlag{
        {Type: "custom_check_image", Reason: getstream.PtrTo("Image was NSFW"), Labels: []string{"NSFW"}},
        {Type: "custom_check_text", Reason: getstream.PtrTo("Text was harmful"), Labels: []string{"harmful"}},
        {Type: "custom_check_video", Reason: getstream.PtrTo("Video contains copyright material"), Labels: []string{"copyright-violation"}},
    },
})

Add Custom Check Flags to the Custom Content

The Custom Check endpoint enables you to submit your own AI moderation results for custom content for review by moderators on the Dashboard.

response, err := client.Moderation().CustomCheck(ctx, &getstream.CustomCheckRequest{
    EntityType:      "entity_type",
    EntityID:        "entity_id",
    EntityCreatorID: "entity_creator_id",
    ModerationPayload: &getstream.ModerationPayload{
        Texts: []string{"offensive"},
    },
    Flags: []getstream.CustomCheckFlag{
        {
            Type:   "custom_check_text",
            Reason: getstream.PtrTo("Text was offensive"),
            Custom: map[string]any{"explanation": "custom is a nullable field allowing you to attach a custom object to the flag for your specific use cases or requirements"},
        },
    },
})

Request Params

  • entity_type (string, required): Identifier for the type of the entity. This helps with categorizing content.
  • entity_id (string, required): Unique identifier for the entity.
  • entity_creator_id (string, required): Unique identifier for the user who created this entity. Generally this is the user ID of the app user.
  • moderation_payload (object, required): The entity or content to be moderated.
  • flags (array, required): List of custom flags.
  • flag.type ("custom_check_text" | "custom_check_image" | "custom_check_video", required): Type of a custom flag.

Response

  • status ("complete" | "partial"): Status of moderation. If you have configured both synchronous and asynchronous moderation (e.g., image moderation), the status will be "partial" because the async moderations are still running as a background task.
  • id (string): ID of the review queue item.
  • item (object): A JSON representation of the review queue item accessible on the Stream dashboard. You can use this to compose your own dashboard UI.
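When feeding your own classifier output into this endpoint, you typically only want to raise flags for labels that cross a confidence threshold. A minimal sketch of that mapping, with hypothetical classifier scores (the Flag struct here is a simplified stand-in for getstream.CustomCheckFlag; use the SDK type in real code):

```go
package main

import "fmt"

// Flag is a simplified stand-in for the SDK's custom check flag shape,
// used here for illustration only.
type Flag struct {
	Type   string
	Reason string
	Labels []string
}

// flagsFromScores turns hypothetical classifier label scores into custom
// check flags, flagging only labels at or above the given threshold.
func flagsFromScores(scores map[string]float64, threshold float64) []Flag {
	var flags []Flag
	for label, score := range scores {
		if score >= threshold {
			flags = append(flags, Flag{
				Type:   "custom_check_text",
				Reason: fmt.Sprintf("classifier scored %q at %.2f", label, score),
				Labels: []string{label},
			})
		}
	}
	return flags
}

func main() {
	flags := flagsFromScores(map[string]float64{"harmful": 0.91, "spam": 0.12}, 0.8)
	fmt.Println(len(flags)) // prints 1: only "harmful" crosses the threshold
}
```

The resulting flags would then be passed as the Flags field of the CustomCheckRequest shown above.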