# Image Moderation

Stream's image moderation engine, powered by [AWS Rekognition](https://aws.amazon.com/rekognition/), uses advanced AI to detect potentially harmful or inappropriate images asynchronously. This means that images are processed in the background to avoid any latency in your application, with moderation results and configured actions being applied as soon as the analysis is complete.

## Supported Categories

The image moderation engine can detect the following categories of content:

- **Explicit**: Detects pornographic content and explicit sexual acts.
- **Non-Explicit Nudity**: Identifies non-pornographic nudity, including exposed intimate body parts and intimate kissing.
- **Swimwear/Underwear**: Detects images of people in revealing swimwear or underwear.
- **Violence**: Identifies violent acts, fighting, weapons, and physical aggression.
- **Visually Disturbing**: Detects gore, blood, mutilation, and other disturbing imagery.
- **Drugs & Tobacco**: Recognizes drug-related content, paraphernalia, and tobacco products.
- **Alcohol**: Identifies alcoholic beverages and alcohol consumption.
- **Rude Gestures**: Detects offensive hand gestures and inappropriate body language.
- **Gambling**: Identifies gambling-related content, including cards, dice, and slot machines.
- **Hate Symbols**: Recognizes extremist symbols, hate group imagery, and offensive symbols.
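To make the taxonomy concrete, the categories above could be modeled as a per-category rule table. This is an illustrative sketch only: the key names, rule shape, and threshold values are assumptions for this example, not the SDK's actual configuration identifiers.

```typescript
// Hypothetical per-category rule table (illustrative shape only; the real
// dashboard/SDK configuration may differ). Thresholds are confidence scores 1-100.
type ImageRule = { threshold: number; action: "flag" | "block" };

const imageRules: Record<string, ImageRule> = {
  explicit: { threshold: 90, action: "block" },
  non_explicit_nudity: { threshold: 80, action: "flag" },
  swimwear_underwear: { threshold: 70, action: "flag" },
  violence: { threshold: 75, action: "block" },
  visually_disturbing: { threshold: 75, action: "block" },
  drugs_tobacco: { threshold: 60, action: "flag" },
  alcohol: { threshold: 60, action: "flag" },
  rude_gestures: { threshold: 65, action: "flag" },
  gambling: { threshold: 60, action: "flag" },
  hate_symbols: { threshold: 50, action: "block" },
};
```

One rule per category keeps tuning local: raising `hate_symbols` sensitivity never changes how `alcohol` is handled.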

## Confidence Scores

For each category, you can set a confidence score threshold between 1 and 100 that determines when actions should be triggered. For example:

- A threshold of 90 is very strict and will only trigger on highly confident matches.
- A threshold of 60 requires less model confidence and will therefore catch more potentially problematic content.
- Lower thresholds increase sensitivity but may lead to more false positives.

We recommend starting with default thresholds and adjusting based on your needs:

- Use higher thresholds (80-90) for categories requiring strict enforcement.
- Use medium thresholds (60-70) for categories needing balanced detection.
- Use lower thresholds (40-50) for maximum sensitivity where false positives are acceptable.
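The trigger rule itself is simple: an action fires when the model's confidence for a category meets or exceeds that category's threshold. A minimal sketch (the function name is ours, not part of any SDK):

```typescript
// Returns true when a detection should trigger the configured action.
function shouldTrigger(confidence: number, threshold: number): boolean {
  return confidence >= threshold;
}

shouldTrigger(92, 90); // strict threshold, high-confidence match: triggers
shouldTrigger(65, 90); // same detection below a strict threshold: ignored
shouldTrigger(65, 60); // lenient threshold catches the borderline case
```

This is why lowering a threshold increases sensitivity: every detection that triggered before still triggers, plus the lower-confidence ones that previously fell short.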

## How It Works

When an image is uploaded:

1. The image is accepted immediately to maintain low latency.
2. Analysis begins asynchronously in the background.
3. AI models evaluate the image against all configured categories.
4. If confidence thresholds are met, configured actions are applied.
5. The message or post is updated based on moderation results.
6. Flagged images appear in the Media Queue for moderator review.
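The steps above can be sketched as an async flow. Everything here is a stand-in: `analyze` is a hypothetical placeholder for the Rekognition-backed engine, and the threshold table shape is an assumption, not the SDK's API.

```typescript
interface Detection {
  category: string;
  confidence: number;
}

// Hypothetical stand-in for the Rekognition-backed analysis (steps 2-3).
async function analyze(imageUrl: string): Promise<Detection[]> {
  // A real deployment would receive these results from the moderation engine.
  return [{ category: "violence", confidence: 83 }];
}

// Steps 1-6: the upload is accepted immediately; moderation happens after.
async function moderateUpload(
  imageUrl: string,
  thresholds: Record<string, number>,
): Promise<"flagged" | "approved"> {
  const detections = await analyze(imageUrl);
  // Step 4: compare each detection against its configured threshold.
  const flagged = detections.filter(
    (d) => d.confidence >= (thresholds[d.category] ?? 101),
  );
  // Steps 5-6: flagged images update the message and enter the Media Queue.
  return flagged.length > 0 ? "flagged" : "approved";
}
```

Note that the caller never waits on `moderateUpload` before showing the message; the result arrives later and the message is updated in place.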

## Best Practices

- Configure confidence scores based on your tolerance for false positives.
- Use the "Flag" action for borderline cases that need human review.
- Use "Block" for clearly inappropriate content.
- Monitor the Media Queue regularly to validate automated decisions.
- Adjust thresholds based on observed accuracy.
- Consider your audience and community standards when configuring rules.
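The Flag-versus-Block advice above can be expressed as a two-tier rule: block outright on high confidence, and route borderline detections to human review. A sketch under that assumption (function and parameter names are ours):

```typescript
type Verdict = "block" | "flag" | "allow";

// Hypothetical two-tier rule: block on high confidence, flag borderline
// detections for moderator review in the Media Queue.
function verdict(confidence: number, flagAt: number, blockAt: number): Verdict {
  if (confidence >= blockAt) return "block";
  if (confidence >= flagAt) return "flag";
  return "allow";
}

verdict(95, 60, 85); // "block": clearly inappropriate content
verdict(70, 60, 85); // "flag": borderline, sent for human review
verdict(40, 60, 85); // "allow": below both thresholds
```

As you validate decisions in the Media Queue, the two thresholds give you independent knobs: widen the flag band to see more borderline cases, or raise the block line if automated blocks prove too aggressive.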


---

This page was last updated at 2026-04-16T18:29:40.208Z.

For the most recent version of this documentation, visit [https://getstream.io/moderation/docs/node/engines/image-moderation/](https://getstream.io/moderation/docs/node/engines/image-moderation/).