Reactive moderation responds to harmful content after users share it on a platform. This approach typically relies on human moderators monitoring chat channels and on user reports, which moderators or platform administrators then review and action.
Reactive moderation can also include user-to-user moderation, which gives users tools to limit their own exposure to inappropriate content or escalate it for removal. For instance, a user can mute someone who is spamming their inbox, or flag a message they believe breaks the community guidelines for administrative review.
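As a rough illustration, the two user-to-user tools mentioned above, muting and flagging, can be sketched as a pair of simple operations: muting suppresses delivery of a sender's messages on the receiving side, while flagging queues a report for later administrator review. All class and method names below are hypothetical, not a real platform API.

```python
# Hypothetical sketch of user-to-user reactive moderation primitives:
# muting (client-side suppression) and flagging (a queue for admin review).

class UserModerationTools:
    def __init__(self):
        self.muted = set()    # user IDs this account has muted
        self.flag_queue = []  # reports awaiting administrator review

    def mute(self, user_id):
        """Suppress all future messages from user_id for this account."""
        self.muted.add(user_id)

    def deliver(self, sender_id, message):
        """Return the message unless the sender is muted."""
        if sender_id in self.muted:
            return None  # silently dropped; the sender is not notified
        return message

    def flag(self, message_id, reason):
        """Report a message that appears to break community guidelines."""
        self.flag_queue.append({"message_id": message_id, "reason": reason})


tools = UserModerationTools()
tools.mute("spammer42")
print(tools.deliver("spammer42", "buy now!"))  # None: muted sender
print(tools.deliver("friend01", "hello"))      # hello
tools.flag("msg-991", "harassment")
print(len(tools.flag_queue))                   # 1
```

Note that both actions are reactive: nothing is blocked before it is sent. Muting only changes what the muting user sees, and a flag has no effect until a moderator works through the review queue.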