Moderate Usernames and Images

This guide explains how to implement moderation for usernames and images in your application using Stream's Moderation APIs. You'll learn how to check usernames before user creation/updates and moderate image content to maintain a safe environment for your users.

Currently, the JS, Ruby, and .NET SDKs support this endpoint, but we are working on adding support for other SDKs. If you need this feature in another SDK, please let us know at support@getstream.io.

Usage of this endpoint counts towards your text and image moderation quota.

Overview

User profile moderation allows you to check both usernames and profile images using a single API endpoint and unified policy configuration. You can:

  • Moderate usernames using AI (LLM), custom blocklists, or pattern-based detection
  • Moderate profile images using AWS Rekognition, Bodyguard Image Analysis, and OCR
  • Combine multiple providers for layered protection
  • Configure once: the same policy handles both usernames and images (see the sketch below)
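
Because both checks share a single endpoint, a username and a profile image can be screened together in one request. A minimal sketch, combining the two fields that are shown separately later in this guide:

// Check a username and a profile image in a single call
const response = await client.moderation.checkUserProfile("user-id", {
  username: "username_to_check",
  image: "https://example.com/profile.jpg",
});

// recommended_action reflects the combined result of all configured providers
if (response.recommended_action !== "keep") {
  // Block the profile update and ask the user to change the flagged content
}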

Configuring User Profile Moderation

Username and profile image moderation can operate in different modes depending on your configuration. You can combine multiple moderation providers (LLM, blocklists, OCR, etc.) for comprehensive protection.

Step 1: Configure Your Moderation Policy

  1. Navigate to the Stream Dashboard
  2. Go to Moderation → Policies
  3. Select or create a policy for user profile moderation (e.g., User Profile)
  4. Configure the providers you want to use

Username Moderation

Before creating or updating a user profile, you can check if the username complies with basic moderation rules to help maintain a safe environment for your users.

AI Moderation (LLM)

  • Most advanced detection using Large Language Models
  • Catches nuanced violations and creative circumvention
  • Out-of-the-box labels include PROFANITY, HATE_SPEECH, IMPERSONATION, SCAM, and MISLEADING_IDENTITY

These labels cover the most common moderation scenarios and help you get started quickly. You can customize actions for each label or add your own.
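
If you want to tell users why a name was rejected, you can map these labels to user-facing messages. A sketch, assuming you extract the flagged labels from the check response yourself (the response shape is not shown in this guide, so the flaggedLabels value below is illustrative):

// Map out-of-the-box labels to user-facing copy
const labelMessages = {
  PROFANITY: "Please choose a username without profanity.",
  HATE_SPEECH: "This username contains disallowed language.",
  IMPERSONATION: "This username appears to impersonate someone else.",
  SCAM: "This username looks like a scam attempt.",
  MISLEADING_IDENTITY: "This username misrepresents your identity.",
};

// Illustrative: labels extracted from the moderation response
const flaggedLabels = ["PROFANITY"];
const message =
  labelMessages[flaggedLabels[0]] ?? "Please choose a different username.";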

Blocklists

  • Create custom blocklists in Moderation → Advanced Filters (or programmatically, as sketched after this list)
  • Fast pattern matching for known bad words
  • Uses substring matching for usernames (e.g., blocking "bad" matches "baduser123")
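
Blocklists can also be managed from your backend. A minimal sketch using the JS SDK's createBlockList call (the list name and words are illustrative; attach the resulting list to your user profile policy in the dashboard):

// Create a custom blocklist of known bad terms
await client.createBlockList({
  name: "username-blocklist",
  words: ["bad", "anotherbadword"],
});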

Check Username

// Assumes a server-side Stream client, e.g. from the stream-chat JS SDK
import { StreamChat } from "stream-chat";

const client = StreamChat.getInstance("API_KEY", "API_SECRET");

// Check username before user creation/update
const response = await client.moderation.checkUserProfile("user-id", {
  username: "username_to_check",
});

// Response will indicate if the username is acceptable
if (response.recommended_action === "keep") {
  // Username is acceptable, proceed with user creation/update
} else {
  // Username violates moderation rules, ask user to choose a different name
}
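
In practice, you would wrap this check in your signup or profile-update flow. A sketch (the helper name and error handling are illustrative):

// Hypothetical helper used during signup or profile updates
async function validateUsername(userId, username) {
  const response = await client.moderation.checkUserProfile(userId, { username });
  if (response.recommended_action !== "keep") {
    throw new Error("Username not allowed; please choose a different name.");
  }
}

await validateUsername("user-id", "username_to_check");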

Profile Image Moderation

You can moderate images using Stream's AI Image Moderation Engine. This helps detect inappropriate or harmful image content before it's published on your platform.

Check Profile Images

// Check profile image before allowing upload
const response = await client.moderation.checkUserProfile("user-id", {
  image: "https://example.com/profile.jpg",
});

// Handle the moderation result
if (response.recommended_action === "keep") {
  // Image is safe to use as profile picture
} else {
  // Image violates moderation rules
}

Dashboard Integration

When LLM or blocklists are configured for usernames, or when image moderation is enabled, flagged content appears in your moderation dashboard.