Image Moderation

Last Edit: Sep 22 2020

Image moderation determines whether an image contains unsafe content, such as explicit adult content or violence. Messages with images deemed unsuitable by Stream's chosen external image moderation partner are flagged and displayed in the moderation dashboard.

Images are given labels based on their content. By default, an image is flagged if it is labeled with any of the following: "Explicit Nudity", "Violence", or "Visually Disturbing".

You can configure your application to use a different list of labels for image moderation, as shown in the example after the table. Labels are organized in two levels: selecting a top-level label matches all of its 2nd-level labels.

| Top-level Label | 2nd-level Labels |
| --- | --- |
| Explicit Nudity | Nudity, Graphic Male Nudity, Graphic Female Nudity, Sexual Activity, Illustrated Nudity Or Sexual Activity, Adult Toys |
| Suggestive | Female Swimwear Or Underwear, Male Swimwear Or Underwear, Partial Nudity, Revealing Clothes |
| Violence | Graphic Violence Or Gore, Physical Violence, Weapon Violence, Weapons, Self Injury |
| Visually Disturbing | Emaciated Bodies, Corpses, Hanging |
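
For example, the label list can be changed through your application settings. The following is a minimal sketch using the stream-chat Python SDK; the `image_moderation_enabled` and `image_moderation_labels` setting names are assumptions for illustration and may differ from the actual API, so confirm them with support before relying on them.

```python
# Minimal sketch: update app settings with a custom list of moderation labels.
# Assumption: the app settings accept `image_moderation_enabled` and
# `image_moderation_labels`; verify the exact field names for your account.
from stream_chat import StreamChat

client = StreamChat(api_key="YOUR_API_KEY", api_secret="YOUR_API_SECRET")

# Flag images that match any of these labels; a top-level label
# also matches all of its 2nd-level labels.
client.update_app_settings(
    image_moderation_enabled=True,
    image_moderation_labels=["Explicit Nudity", "Suggestive", "Violence"],
)
```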

Stream is not an authority on unsafe content and does not claim to provide an exhaustive filter for it. Additionally, image moderation does not detect whether an image includes illegal content, such as child pornography.

Image moderation is not included in standard plans; reach out to support if you want to enable it for your application.