Getting Started

Stream's Moderation API lets you integrate content moderation into any application. This guide walks you through installation, configuration, and your first moderation check.

Installation

npm install @stream-io/stream-node
# or
yarn add @stream-io/stream-node

Initialize the Client

const { StreamClient } = require("@stream-io/stream-node");

const client = new StreamClient("YOUR_API_KEY", "YOUR_API_SECRET");
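Hard-coding credentials is fine for a quick test, but in practice you will want to read them from the environment. A minimal sketch, assuming the variable names STREAM_API_KEY and STREAM_API_SECRET (illustrative names, not mandated by the SDK):

```javascript
// Read Stream credentials from environment variables instead of hard-coding
// them. The variable names below are illustrative, not required by the SDK.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const client = new StreamClient(
//   requireEnv("STREAM_API_KEY"),
//   requireEnv("STREAM_API_SECRET")
// );
```

This fails fast at startup with a clear message instead of producing confusing authentication errors later.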

Create a Moderation Policy

Before checking content, create a moderation configuration that defines which rules to apply:

await client.moderation.upsertConfig({
  key: "my_config",
  ai_text_config: {
    rules: [
      { label: "SPAM", action: "flag" },
      { label: "HARASSMENT", action: "remove" },
    ],
  },
  block_list_config: {
    rules: [{ name: "profanity_en", action: "remove" }],
  },
});

For Stream Chat, use the config key chat:messaging. For Stream Feeds, use feeds. See Configuration for details.
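If you target more than one Stream product, the key lookup can be made explicit. A hedged sketch of the mapping described above (the helper itself is illustrative; only chat:messaging and feeds come from the docs, and my_config is the custom key created earlier in this guide):

```javascript
// Map a Stream product to its moderation config key, per the guide above.
// "chat:messaging" and "feeds" are the documented keys; "my_config" is the
// custom key created earlier. The helper is an illustration, not SDK API.
function configKeyFor(product) {
  switch (product) {
    case "chat":
      return "chat:messaging";
    case "feeds":
      return "feeds";
    default:
      return "my_config";
  }
}
```

You would then pass the result of configKeyFor(...) as config_key when checking content.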

Check Content

const response = await client.moderation.check({
  entity_type: "stream:chat:v1:message",
  entity_id: "message-123",
  entity_creator_id: "user-456",
  moderation_payload: {
    texts: ["Hello, this is a test message"],
  },
  config_key: "my_config",
});

console.log(response.recommended_action); // "keep", "flag", or "remove"
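Moderation checks are network calls and can fail transiently. A minimal retry sketch, with an injected checkFn standing in for client.moderation.check (the wrapper is an illustration, not part of the SDK):

```javascript
// Retry a moderation check a few times before giving up. checkFn stands in
// for client.moderation.check; the wrapper itself is illustrative.
async function checkWithRetry(checkFn, params, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await checkFn(params);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

In production you would likely add a backoff delay between attempts and only retry on errors that are actually transient.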

Handle the Response

The response includes:

  • recommended_action -- "keep", "flag", or "remove"
  • status -- "complete" or "partial" (if async checks are still running)
  • item -- the review queue item (if content was flagged or removed)

if (response.recommended_action === "keep") {
  // Content is safe, no action needed
} else if (response.recommended_action === "flag") {
  // Content is suspicious, send to review queue
  console.log("Flagged for review:", response.item);
} else if (response.recommended_action === "remove") {
  // Content violates policies, remove it
  console.log("Content removed:", response.item);
}
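The branching above can also be written as a small dispatch table, which keeps the handlers easy to extend. The handler bodies here are placeholders; wire them to your own review-queue and deletion logic:

```javascript
// The same keep/flag/remove branching as a dispatch table. Handler bodies are
// placeholders; replace them with your own review-queue and deletion logic.
const actionHandlers = {
  keep: (item) => "kept",
  flag: (item) => "flagged for review",
  remove: (item) => "removed",
};

function handleModeration(response) {
  const handler = actionHandlers[response.recommended_action];
  if (!handler) {
    throw new Error(`Unexpected recommended_action: ${response.recommended_action}`);
  }
  return handler(response.item);
}
```

Note that this only covers the synchronous recommendation; if status is "partial", async checks may still be running and could change the outcome later.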

Next Steps