Image Moderation

LAST EDIT Nov 04 2024

Image moderation determines if an image contains unsafe content, such as explicit adult content or violent content. Messages that are deemed unsuitable by Stream’s chosen external image moderation partner are flagged and displayed in the moderation dashboard.

Image moderation works by detecting links in messages and passing the linked content to our AI moderation service. This means AI moderation is agnostic to whether you host your images on Stream CDN or a different hosting solution.
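For example, with the JavaScript/TypeScript SDK, a message whose attachment points at an image URL goes through image moderation regardless of where the image is hosted. The snippet below is a minimal sketch; the API key, user token, channel, and image URL are placeholders.

```ts
import { StreamChat } from 'stream-chat';

// Placeholder credentials and IDs for illustration only.
const client = StreamChat.getInstance('API_KEY');
await client.connectUser({ id: 'jane' }, 'USER_TOKEN');

const channel = client.channel('messaging', 'general');
await channel.watch();

// The image can live on Stream CDN or any other host;
// image moderation follows the link either way.
await channel.sendMessage({
  text: 'Check out this picture',
  attachments: [
    {
      type: 'image',
      image_url: 'https://example.com/uploads/photo.jpg',
    },
  ],
});
```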

Images are given labels based on their content. By default, any image labeled "Explicit Nudity", "Violence", or "Visually Disturbing" is flagged. Flagged images are then available for review in the moderation dashboard, where your team can take actions such as deleting the message or banning the user.

You can configure your application to use a different list of labels for image moderation. Labels are organized in two levels; a top-level label matches all of its 2nd-level labels. The available labels are listed below, followed by a configuration sketch.

Each top-level label is listed here with its 2nd-level labels:

Explicit Nudity: Nudity, Graphic Male Nudity, Graphic Female Nudity, Sexual Activity, Illustrated Nudity Or Sexual Activity, Adult Toys

Suggestive: Female Swimwear Or Underwear, Male Swimwear Or Underwear, Partial Nudity, Revealing Clothes

Violence: Graphic Violence Or Gore, Physical Violence, Weapon Violence, Weapons, Self Injury

Visually Disturbing: Emaciated Bodies, Corpses, Hanging
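The label list can be changed per application. The sketch below assumes a server-side client and uses image_moderation_enabled and image_moderation_labels app settings; treat those field names as assumptions and confirm them against the API reference for your SDK version.

```ts
import { StreamChat } from 'stream-chat';

// Server-side client: the API secret must never ship in client apps.
const serverClient = StreamChat.getInstance('API_KEY', 'API_SECRET');

// Assumed settings shape: enable image moderation and override the
// default label list (a top-level label covers all its 2nd-level labels).
await serverClient.updateAppSettings({
  image_moderation_enabled: true,
  image_moderation_labels: [
    'Explicit Nudity',
    'Suggestive',
    'Violence',
    'Visually Disturbing',
  ],
});
```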

Limitations

AI moderation in general, and Stream Chat's advanced moderation in particular, are powerful tools, but they are not magic and work best when you are mindful of their limitations. For that reason, we want to spell out some of the weaknesses of AI image moderation and suggest best practices for compensating for them.

Limitation 1: Neither Stream nor the tools we use is an authority on what is or is not offensive

First, while we endeavor to build tools that are sensitive to a diverse range of concerns, we are not the experts on what is or is not harmful. The appropriateness of content is complex, contextual, and evolving, which makes it impossible to cover every possible case.

Best Practice: Be prepared to handle false negatives (and false positives, for that matter). Support flagging of content in your app and consider using our Advanced Moderation feature to give moderators access to the live moderation dashboard.
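As a concrete example, a "report" action in your UI can call the flag endpoint directly; the message ID below is hypothetical. Flagged messages land in the same moderation dashboard as the images flagged automatically.

```ts
import { StreamChat } from 'stream-chat';

const client = StreamChat.getInstance('API_KEY');
await client.connectUser({ id: 'jane' }, 'USER_TOKEN');

// Hypothetical ID of a message an end user reported from your UI.
await client.flagMessage('message-id-to-report');
```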

Limitation 2: Image moderation cannot tell if content is illegal

Our image recognition is built on top of AWS Rekognition and has the same limitations as that service. While Rekognition is good at identifying generally "what" is in an image, it cannot say whether that content is illegal.

Best Practice: Use AI moderation for images as a powerful tool to make a small team of moderators more productive, rather than as a complete solution.
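One way to put this into practice, assuming your SDK version exposes the queryMessageFlags endpoint, is to pull flagged messages into your own moderation tooling so a small team reviews only what was surfaced. The channel filter below is illustrative.

```ts
import { StreamChat } from 'stream-chat';

// Server-side client for moderation tooling.
const serverClient = StreamChat.getInstance('API_KEY', 'API_SECRET');

// Fetch recent message flags for a channel so moderators review only
// what image moderation (or users) surfaced, not every upload.
const { flags } = await serverClient.queryMessageFlags(
  { channel_cid: { $in: ['messaging:general'] } },
  { limit: 25 },
);

for (const flag of flags) {
  console.log(flag.message?.id, flag.created_at);
}
```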