Image Moderation

Image moderation determines whether an image contains unsafe content, such as explicit adult content or violent content. Messages containing images that Stream's external image moderation partner deems unsuitable are flagged and displayed in the moderation dashboard.

Images are given labels based on their content. By default, any image labeled "Explicit Nudity", "Violence", or "Visually Disturbing" is flagged. Flagged images are then available for review in the moderation dashboard, where your team can take actions such as deleting the message or banning the user.

You can configure your application to use a different list of labels for image moderation (see the sketch after the table below). Labels are organized in two levels: a top-level label matches all of its 2nd-level labels.

Top-level label: 2nd-level labels

Explicit Nudity: Nudity, Graphic Male Nudity, Graphic Female Nudity, Sexual Activity, Illustrated Nudity Or Sexual Activity, Adult Toys
Suggestive: Female Swimwear Or Underwear, Male Swimwear Or Underwear, Partial Nudity, Revealing Clothes
Violence: Graphic Violence Or Gore, Physical Violence, Weapon Violence, Weapons, Self Injury
Visually Disturbing: Emaciated Bodies, Corpses, Hanging
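
As an illustration, a custom label list could be applied through the app settings from your backend. The sketch below uses the server-side Node.js SDK; the updateAppSettings call and the image_moderation_labels field name are assumptions for this example, so verify the exact setting name against the current app settings reference.

```typescript
import { StreamChat } from "stream-chat";

// Server-side client; the API secret must never be shipped to client apps.
const client = StreamChat.getInstance("API_KEY", "API_SECRET");

async function configureImageModerationLabels(): Promise<void> {
  // Assumption: the custom label list is set via an app setting named
  // "image_moderation_labels". Both top-level and 2nd-level labels can
  // be listed; a top-level label matches all of its 2nd-level labels.
  await client.updateAppSettings({
    image_moderation_labels: [
      "Explicit Nudity",
      "Violence",
      "Visually Disturbing",
      "Suggestive",
    ],
  });
}

configureImageModerationLabels();
```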

Stream isn't an authority on unsafe content and doesn't claim to be an exhaustive filter of it. Additionally, image moderation does not detect whether an image includes illegal content, such as child pornography.

Enabling Image Moderation

To enable image moderation for your app, set the image_moderation_enabled setting to true.
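
A minimal sketch of doing this with the server-side Node.js SDK, assuming app settings are updated through updateAppSettings:

```typescript
import { StreamChat } from "stream-chat";

// Server-side client; keep the API secret on the backend.
const client = StreamChat.getInstance("API_KEY", "API_SECRET");

async function enableImageModeration(): Promise<void> {
  // Turn on image moderation for the whole app.
  await client.updateAppSettings({
    image_moderation_enabled: true,
  });
}

enableImageModeration();
```

Once enabled, newly uploaded images are checked against the configured labels and matching messages appear in the moderation dashboard for review.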