
Peerspace Scales Messaging Safely With Stream Chat & AI Moderation

Emily N.
Published January 16, 2026

Peerspace is the leading marketplace for booking unique spaces for meetings, productions, and events. The platform connects guests with hosts through real-time, in-app messaging, enabling seamless coordination, faster bookings, and stronger trust on both sides of the marketplace.

For Peerspace, keeping conversations inside the platform is a strategic priority. In-app messaging reduces reliance on third-party tools, such as email or SMS, where visibility, enforcement, and user protection are limited. But keeping users on-platform only works if those conversations are safe.

The Challenge: Scaling Trust as Messaging Opens Up

As Peerspace grew, so did the importance of messaging and the risk that came with it.

Bad actors began targeting hosts with phishing attempts that impersonated Peerspace support, urging them to "verify" their identity or click on malicious links. These messages were often subtle, designed to look legitimate, and aimed squarely at the most valuable users on the platform.

Earlier on, messaging rules were more permissive. Users who hadn't completed a booking could still initiate conversations and include links, which increased exposure to scams. Hosts and guests would report suspicious messages, but moderation was largely reactive.

At the same time, Peerspace was intentionally working to reduce off-platform communication. That meant in-app chat needed to be both frictionless and well-protected.

Why Peerspace Uses Stream Chat and AI Moderation

Peerspace adopted Stream Chat and began using Stream's AI Moderation API about a year ago, replacing earlier, more rudimentary approaches.

Before Stream, moderation relied on basic word and domain blocks: effective against obvious abuse, but insufficient for sophisticated phishing and impersonation attempts.

Stream offered Peerspace:

  • Native moderation built directly into chat

  • Advanced filters with keyword, domain, and regex support

  • AI-based labeling to detect nuanced scam patterns

  • A centralized moderation dashboard usable across teams

This made it possible to scale safety alongside messaging—without pushing conversations off-platform.

How Peerspace Uses Stream AI Moderation

Centralized Rule Creation, Distributed Oversight

Moderation strategy and rule creation are led by Peerspace's security team, which is responsible for designing and maintaining the platform's advanced filters.

While one person owns the creation and tuning of moderation rules, Peerspace's Trust & Safety team uses the Stream moderation dashboard to:

  • Monitor flagged messages

  • Review AI-labeled content

  • Respond to user reports of suspected scams

This setup allows Peerspace to centralize control while still giving Trust & Safety teams the visibility they need to act quickly and confidently.

Advanced Filters to Prevent Marketplace Abuse

Peerspace uses Stream's advanced filters to combat scams that disproportionately target hosts, including:

  • Impersonation language posing as Peerspace support

  • Suspicious links and domains

  • Regex rules to catch subtle text variations

These filters are designed specifically for guest-to-host communication, where phishing risk is highest.

Over time, filters have been refined as scam tactics evolved, moving from simple blocks to more nuanced detection.
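To illustrate the kind of regex rules described above, here is a minimal sketch in Python. The patterns are hypothetical examples invented for illustration; Peerspace's actual filter rules run inside Stream's moderation pipeline and are not public.

```python
import re

# Hypothetical rule: impersonation of platform support combined with a
# request to "verify" or "confirm" something.
IMPERSONATION = re.compile(
    r"\bpeerspace\s*(support|team|security)\b.*\b(verify|confirm)\b",
    re.IGNORECASE | re.DOTALL,
)

# Hypothetical rule: obfuscated link variants such as "peer-space dot com"
# that slip past a plain URL blocklist.
OBFUSCATED_LINK = re.compile(
    r"\b\w[\w-]*\s*(\.|dot)\s*(com|net|org)\b",
    re.IGNORECASE,
)

def flag_message(text: str) -> list[str]:
    """Return the names of any rules the message trips."""
    hits = []
    if IMPERSONATION.search(text):
        hits.append("impersonation")
    if OBFUSCATED_LINK.search(text):
        hits.append("suspicious_link")
    return hits

print(flag_message(
    "This is Peerspace Support, please verify your account at peer-space dot com"
))  # → ['impersonation', 'suspicious_link']
```

A legitimate booking inquiry like "Is the loft available on Saturday?" trips neither rule, which is the point: the regex layer targets specific scam shapes rather than broad keywords.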

Custom AI Prompts for Phishing Detection

To go beyond static rules, Peerspace worked directly with Stream's moderation team to develop a custom AI prompt focused on identifying phishing and impersonation attempts.

This AI-driven approach helps:

  • Detect scams that don't rely on obvious keywords

  • Identify messages that mimic official Peerspace language

  • Improve accuracy while reducing false positives

As Stream's AI models and labels have improved year over year, Peerspace has seen increasingly reliable detection without needing to constantly rebuild rules from scratch.
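For a sense of what a custom prompt of this kind might contain, here is a hypothetical sketch. Peerspace's actual prompt was developed with Stream's moderation team and is not public; both the prompt text and the request shape below are illustrative assumptions, not Stream's API.

```python
# Hypothetical phishing-detection prompt for an LLM-based moderation
# classifier. Wording and payload shape are illustrative only.
PHISHING_PROMPT = """\
You are moderating guest-to-host messages on a space-rental marketplace.
Label the message PHISHING if it:
- impersonates platform support or staff,
- asks the recipient to "verify" an account or identity,
- directs the recipient to an external link or off-platform channel.
Otherwise label it OK. Respond with exactly one label.
"""

def build_moderation_request(message: str) -> dict:
    """Assemble an illustrative request payload for an LLM classifier."""
    return {
        "system": PHISHING_PROMPT,
        "input": message,
        "labels": ["PHISHING", "OK"],
    }
```

The value of prompt-based detection over static rules is that the model can label a message that mimics official support language even when it contains no blocked keyword or domain.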

Policy, Product, and Moderation Working Together

Moderation at Peerspace isn't just about filters—it's reinforced by product-level safeguards.

Today:

  • Messages cannot contain links until a booking is initiated; guests can still message hosts with booking requests, just without links

  • Users can still report suspicious messages, feeding them into moderation review

  • AI moderation operates in real time to prevent harmful messages from reaching hosts

This layered approach ensures that trust isn't dependent on a single system, but reinforced across the platform.
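The link-before-booking rule above can be sketched as a simple gate. This is a hypothetical policy check for illustration; the real enforcement runs server-side inside Stream's moderation pipeline.

```python
import re

# Match common link forms: http(s) URLs and bare www. links.
LINK_PATTERN = re.compile(r"(https?://|www\.)\S+", re.IGNORECASE)

def can_send(text: str, booking_initiated: bool) -> bool:
    """Allow a guest-to-host message unless it contains a link
    and no booking has been initiated yet (illustrative policy check)."""
    if booking_initiated:
        return True
    return LINK_PATTERN.search(text) is None
```

Usage: `can_send("Is the studio free on the 12th?", booking_initiated=False)` returns `True`, while the same message with a `https://` link returns `False` until a booking exists.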

Visibility That Builds Confidence in In-App Messaging

For an organization like Peerspace, visibility into moderation outcomes is essential to maintaining trust in in-app chat.

Stream's moderation dashboard allows teams to:

  • Understand why content was flagged

  • Confirm that moderation rules are working as intended

  • Stand behind in-app messaging as a safe alternative to third-party tools

As the dashboard has evolved, Peerspace has benefited from clearer insights and more accurate AI labels, making moderation easier to manage as usage scales.

Keeping Conversations In-App—Safely

Peerspace's long-term goal is clear: keep host-guest communication inside the platform, where it can be protected, moderated, and improved.

Stream Chat enables real-time, high-quality conversations. Stream AI Moderation is what makes those conversations safe at scale.

By reducing phishing attempts and increasing confidence in in-app messaging, Peerspace minimizes the need for users to move conversations off-platform, thereby protecting both users and the marketplace itself.

The Results

With Stream Chat and AI Moderation in place, Peerspace has been able to:

  • Reduce phishing attempts targeting hosts

  • Detect impersonation scams more accurately over time

  • Empower Trust & Safety teams with better visibility

  • Keep more conversations safely in-app

Moderation runs quietly in the background, protecting users without disrupting legitimate communication.

Why It Works

Peerspace's success comes from combining:

  1. In-app chat that reduces third-party messaging risk

  2. AI-powered moderation tuned to marketplace abuse

  3. Clear ownership with shared visibility across teams

  4. Continuous improvement as Stream's dashboard and AI evolve

For marketplaces where trust, messaging, and safety are inseparable, Stream provides the foundation to scale all three together.
