Stream Chat
Integrating moderation in Stream Chat is a straightforward process that requires no code changes. You simply need to configure moderation policies through the dashboard, and Stream will automatically handle content moderation for your chat application. This guide will walk you through setting up moderation policies and monitoring flagged content through the Stream dashboard.
Configure Moderation Policy
A moderation policy is a set of rules that defines how content should be moderated in your application. See Configuration for details on setting up moderation policies.
Auto Moderation on Message
Client-side integration
If you are using any of our Chat SDKs, you can immediately start seeing moderation in action. Think of it as having a highly efficient moderation team working 24/7 behind the scenes, except this one runs on algorithms instead of coffee! ☕️
Server-side integration
If you are using the SendMessage API from the server side, messages are skipped for moderation by default. You can enable moderation by setting the force_moderation option to true:

const { message } = await channel.sendMessage(
  {
    text,
    attachments,
    user_id: sentBy,
  },
  {
    force_moderation: true,
  },
);
It's important to note that if a message is blocked, the SendMessage API will NOT return an error. Instead, the returned message object will have a type of error.
The moderation object in the response includes the action taken and detailed information about why the message was moderated. An example response:
{
  "message": {
    "id": "2b56e00f-4149-465f-b1e7-62d892788f85",
    "type": "error",
    "text": "Message was blocked by moderation policies",
    "moderation": {
      "action": "remove",
      "original_text": "F*** you motherf***er",
      "text_harms": ["toxicity", "insult"],
      "blocklists_matched": ["profanity-list"]
    },
    "user": {
      "id": "little-lake-2",
      "name": "little-lake-2",
      ...
    },
    "cid": "messaging:first",
    "created_at": "2024-11-21T13:13:38.071535Z",
    "updated_at": "2024-11-21T13:13:38.071535Z",
    "shadowed": false,
    ...
  },
  "duration": "136.06ms"
}
The moderation object contains the following fields:
| Field | Type | Description |
|---|---|---|
| action | string | The moderation action taken: bounce, remove, or flag |
| original_text | string | The original message text before moderation |
| text_harms | string[] | Labels from AI text or toxicity detection (e.g., ["toxicity", "insult"]) |
| image_harms | string[] | Labels from AI image moderation (e.g., ["nudity", "violence"]) |
| blocklists_matched | string[] | Names of blocklists that matched (e.g., ["profanity-list", "spam-words"]) |
The detailed moderation labels (text_harms, image_harms, blocklists_matched) are only included in server-side responses. Client-side responses only include action and original_text.
When a message is blocked by moderation, no WebSocket events (message.new) will be triggered for that message. This means other users in the channel will not receive any notification or see the blocked message in their chat interface. The message is effectively prevented from being delivered to any recipients.
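Putting this together, server-side code can inspect the SendMessage response for the fields described above. The sketch below is illustrative: getModerationOutcome is a hypothetical helper name, not part of the Stream SDK.

```javascript
// Sketch: classify a SendMessage response using the documented moderation fields.
// `response` is the object returned by channel.sendMessage(...).
function getModerationOutcome(response) {
  const { message } = response;
  // A blocked message comes back with type "error" instead of an API error.
  if (message.type === "error" && message.moderation) {
    return {
      blocked: message.moderation.action === "remove",
      action: message.moderation.action,
      // Detailed labels are only present in server-side responses.
      textHarms: message.moderation.text_harms ?? [],
      blocklists: message.moderation.blocklists_matched ?? [],
    };
  }
  return { blocked: false, action: null, textHarms: [], blocklists: [] };
}
```

This keeps the "blocked message is not an error" behavior explicit in one place, so application code does not mistake a moderated message for a successful delivery.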
User-Driven Actions
In addition to automated moderation, Stream's Chat SDK provides methods for users to take moderation actions against other users. These actions help maintain community standards by allowing users to report inappropriate behavior or content. For detailed information on the effects of these actions, please check the documentation for Chat Moderation Tools. Here we cover some basic examples of how to use these actions.
Flag User
Users can flag another user with a reason. This will trigger a review in the moderation dashboard.
If you are using the Chat UI SDK, users can flag a user from the user actions menu.
If you have your own custom UI, you can use the following API to flag the user:
await client.moderation.flag({
  entity_type: "stream:user",
  entity_id: "user_id",
  reason: "reason",
  user_id: "user_id", // User ID who is flagging the user; required only for server-side integration.
});
Flag Message
Users can flag a message, which will trigger a review in the moderation dashboard.
If you are using the Chat UI SDK, users can flag a message from the message actions menu.
If you have your own custom UI, you can use the following API to flag the message:
await client.moderation.flag({
  entity_type: "stream:chat:v1:message",
  entity_id: "message_id",
  reason: "reason",
  user_id: "user_id", // User ID who is flagging the message; required only for server-side integration.
});
Ban User
A user with moderator permissions can ban a user from the chat. For more details, please check the documentation for Banning Users.
await client.ban({
  target_user_id: "target_user_id",
  banned_by_id: "user_id", // User ID who is banning the user.
});
Unban User
A user with moderator permissions can unban a user from the chat.
await client.unban({
  target_user_id: "target_user_id",
});
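The ban and unban calls above compose naturally into higher-level moderation flows. The sketch below shows one such flow, a temporary ban; temporaryBan and durationMs are hypothetical names introduced here, and client is assumed to be any object exposing the ban/unban methods shown above.

```javascript
// Sketch: ban a user, wait, then lift the ban.
// temporaryBan is a hypothetical helper, not part of the Stream SDK.
async function temporaryBan(client, targetUserId, bannedById, durationMs) {
  await client.ban({
    target_user_id: targetUserId,
    banned_by_id: bannedById, // Moderator performing the ban.
  });
  // Wait out the ban period, then unban.
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  await client.unban({ target_user_id: targetUserId });
}
```

In production you would typically persist the expiry rather than rely on an in-process timer, so the unban survives a server restart.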