# Video Moderation

Stream's video moderation engine, powered by [AWS Rekognition](https://aws.amazon.com/rekognition/), provides real-time content analysis of video uploads through advanced AI processing. Unlike image moderation, video analysis occurs frame-by-frame asynchronously to ensure thorough content review without impacting application performance.

Key Features:

- Detect inappropriate content, violence, and explicit material
- Analyze video frames for policy violations
- Moderate video content at scale
- Receive frame-by-frame analysis with timestamps
- Get confidence scores for detected violations

## Supported Categories

Our video moderation engine analyzes frames to detect:

- **Explicit Content**:
  - Pornographic material
  - Explicit sexual acts
- **Non-Explicit Nudity**:
  - Exposed intimate body parts
  - Intimate physical contact
- **Swimwear/Underwear**:
  - People in revealing swimwear
  - Underwear detection
- **Violence & Visually Disturbing**:
  - Fighting and physical aggression
  - Weapons
  - Gore and blood
  - Disturbing imagery
- **Substances**:
  - Drug-related content and paraphernalia
  - Tobacco products
  - Alcoholic beverages
- **Inappropriate Content**:
  - Rude gestures
  - Offensive body language
  - Gambling-related content
  - Hate symbols and extremist imagery

## Limitations

- We currently support video moderation only for `mp4` and `mov` files. Additionally, the file MIME type must be either `video/mp4` or `video/mov`. To prevent users from uploading other file types, you can configure a File Upload Whitelist/Blocklist from the dashboard.

<admonition type="warning">

Videos recorded on iPhone cameras are typically saved with the `video/quicktime` MIME type, which is not currently supported by AI Video File Moderation.

</admonition>
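
Given these constraints, it can help to reject unsupported files client-side before upload. The sketch below is illustrative and not part of the Stream SDK; it assumes the file object exposes a `type` property, as browser `File` objects do.

```javascript
// Pre-upload check: only MIME types supported by video moderation pass.
const SUPPORTED_VIDEO_TYPES = new Set(["video/mp4", "video/mov"]);

function isModeratableVideo(file) {
  // `file.type` holds the MIME type on browser File objects.
  return SUPPORTED_VIDEO_TYPES.has(file.type);
}

console.log(isModeratableVideo({ type: "video/mp4" }));       // true
console.log(isModeratableVideo({ type: "video/quicktime" })); // false (iPhone recordings)
```

A check like this pairs well with the dashboard allowlist: the client gives immediate feedback, while the server-side list remains the actual enforcement point.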

- Video Moderation works asynchronously, so the video is accepted immediately without added latency. However, moderation can take a few seconds to a few minutes to complete.
  Once moderation is complete, the video is either blocked or flagged, depending on the configured rules.

## How It Works

When a video is uploaded:

1. The video is accepted immediately to maintain low latency.
2. Analysis begins asynchronously in the background.
3. AI models evaluate the video against all configured categories.
4. If confidence thresholds are met, configured actions are applied.
5. The message or post is updated based on moderation results.
6. Flagged videos appear in the [Media Queue](https://dashboard.getstream.io/app/1320873/moderation/media) for moderator review.
7. Frame-by-frame analysis with timestamps is included in the results for review.
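
Step 4 can be sketched as a simple rule evaluation. The rule shape, label names, and thresholds below are illustrative assumptions, not the actual API response format.

```javascript
// Hypothetical per-category rules: confidence threshold + action.
const rules = [
  { label: "Explicit Content", threshold: 80, action: "block" },
  { label: "Violence", threshold: 60, action: "flag" },
];

function decideAction(detections) {
  // `detections`: [{ label, confidence, timestampMs }] from analyzed frames.
  let action = "allow";
  for (const d of detections) {
    const rule = rules.find((r) => r.label === d.label);
    if (rule && d.confidence >= rule.threshold) {
      if (rule.action === "block") return "block"; // block outranks flag
      action = "flag";
    }
  }
  return action;
}

console.log(decideAction([{ label: "Violence", confidence: 72, timestampMs: 4000 }])); // "flag"
```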

## Best Practices

For optimal video moderation:

- Configure confidence scores based on your tolerance for false positives
- Use the "Flag" action for borderline cases that need human review
- Use "Block" for clearly inappropriate content
- Monitor the Media Queue regularly to validate automated decisions
- Review frame-by-frame analysis with timestamps
- Adjust thresholds based on observed accuracy
- Consider your audience and community standards when configuring rules
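
When reviewing frame-by-frame analysis, nearby flagged frames usually belong to the same scene. A minimal sketch (the grouping logic and gap value are assumptions, not SDK behavior) that merges flagged timestamps into segments a moderator can jump to:

```javascript
// Group flagged frame timestamps that fall within `gapMs` of each other
// into review segments.
function groupFlaggedFrames(timestampsMs, gapMs = 2000) {
  const segments = [];
  for (const t of [...timestampsMs].sort((a, b) => a - b)) {
    const last = segments[segments.length - 1];
    if (last && t - last.endMs <= gapMs) {
      last.endMs = t; // extend the current segment
    } else {
      segments.push({ startMs: t, endMs: t }); // start a new segment
    }
  }
  return segments;
}

console.log(groupFlaggedFrames([1000, 1500, 8000]));
// → [{ startMs: 1000, endMs: 1500 }, { startMs: 8000, endMs: 8000 }]
```

Grouping keeps the Media Queue review focused on a handful of segments rather than hundreds of individual frames.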


---

This page was last updated at 2026-04-16T18:29:44.311Z.

For the most recent version of this documentation, visit [https://getstream.io/moderation/docs/node/engines/ai-video-file-moderation/](https://getstream.io/moderation/docs/node/engines/ai-video-file-moderation/).