Image Moderation
Image moderation determines if an image contains unsafe content, such as explicit adult content or violent content. Messages that are deemed unsuitable by Stream’s chosen external image moderation partner are flagged and displayed in the moderation dashboard.
Images are given labels based on their content. By default, an image is flagged if it receives any of the following labels: "Explicit Nudity", "Violence", or "Visually Disturbing".
You can configure your application to use a different list of labels for image moderation. Labels are organized in two levels; a top-level label matches all of its second-level labels (see the configuration sketch after the table below).
Top-level label | Second-level labels |
---|---|
Explicit Nudity | Nudity, Graphic Male Nudity, Graphic Female Nudity, Sexual Activity, Illustrated Nudity Or Sexual Activity, Adult Toys |
Suggestive | Female Swimwear Or Underwear, Male Swimwear Or Underwear, Partial Nudity, Revealing Clothes |
Violence | Graphic Violence Or Gore, Physical Violence, Weapon Violence, Weapons, Self Injury |
Visually Disturbing | Emaciated Bodies, Corpses, Hanging |
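If the default list is too broad or too narrow for your app, you can supply your own. The sketch below is a minimal example, not a definitive reference; it assumes the app settings accept an `image_moderation_labels` array of label names from the table above (check the API reference for your SDK version).

```javascript
// A minimal sketch: flag images only for a custom set of labels.
// Assumes `image_moderation_labels` accepts top-level or
// second-level label names from the table above.
await client.updateAppSettings({
  image_moderation_enabled: true,
  image_moderation_labels: ["Explicit Nudity", "Graphic Violence Or Gore"],
});
```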
Stream is not an authority on unsafe content, nor does it claim to be an exhaustive filter of it. Additionally, image moderation does not detect whether an image includes illegal content, such as child pornography.
Enabling Image Moderation
To enable image moderation for your app, set the `image_moderation_enabled` setting to `true`.
```javascript
// Turn on image moderation in the app settings.
await client.updateAppSettings({
  image_moderation_enabled: true,
});
```
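To verify the change, you can read the settings back with `getAppSettings`; a minimal sketch, assuming the response exposes the current configuration on an `app` field:

```javascript
// Fetch the current app configuration and confirm the flag is set.
const settings = await client.getAppSettings();
console.log(settings.app?.image_moderation_enabled); // true
```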