Auto Moderation Reduces In-App Phishing Attempts & Harmful Content by 90% for Sports Card Marketplace
Challenge: After integrating Stream's Chat API to connect its community of sports memorabilia collectors and hobbyists, CollX's user retention rates skyrocketed by 75%. Recognizing the value of the app's loyal user base, Director of Operations Alexander Liriano made the safety of the CollX user experience a top priority. But to scale moderation efforts effectively, Liriano needed to incorporate an AI-powered moderation solution into his strategy.
Strategy: Stream Chat's strong performance was a good indicator to the CollX team that Stream's Auto Moderation product would also suit their use case. Liriano connected with the Principal Product Manager of Moderation, Adnan Al-Khatib, who demonstrated the clear value of an AI-powered moderation tool and how Stream could alleviate the manual review workload for CollX.
Result: By leveraging Auto Moderation's powerful AI features, Liriano no longer spends hours manually monitoring bad actors who create multiple alias accounts and attempt to disrupt the user experience of CollX. Implementing Stream's Auto Moderation has resulted in a 90% reduction in phishing attempts and harmful content in the app.
CollX launched in early 2022 as the fastest way for collectors to learn the value of their sports trading cards. CollX integrated Stream's Chat API to add a social component to the platform. In-app messaging enables enthusiasts to chat with one another or buy, sell, trade, and negotiate cards. CollX saw a 75% improvement in user retention after implementing Stream Chat.
The Need for In-App Chat Moderation
As CollX's user base grew, so did the number of bad actors and phishing attempts. Malicious users attempted to phish for the personal and financial information of other CollX users using platform circumvention techniques. Some CollX users reported being harassed via chat with spammy messages or NSFW content.
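Platform-circumvention attempts often rely on character substitutions to slip payment handles or off-platform contact cues past simple keyword filters. As a rough illustration of the problem (this is a hypothetical sketch, not Stream's actual detection logic, and the blocked phrases are invented examples):

```python
# Illustrative only: a naive normalizer that undoes common character
# substitutions scammers use to evade keyword filters. Real detection
# systems are far more sophisticated; all names here are hypothetical.
SUBSTITUTIONS = str.maketrans({
    "@": "a", "$": "s", "0": "o", "1": "l", "3": "e",
    "4": "a", "5": "s", "!": "i",
})

def normalize(text: str) -> str:
    """Lowercase the text and undo common leetspeak substitutions."""
    return text.lower().translate(SUBSTITUTIONS)

# Hypothetical cues that a conversation is being moved off-platform.
BLOCKED_PHRASES = ["cashapp", "venmo me"]

def looks_like_circumvention(message: str) -> bool:
    """Collapse spacing/punctuation and check for obfuscated blocked phrases."""
    collapsed = normalize(message).replace(" ", "").replace(".", "")
    return any(p.replace(" ", "") in collapsed for p in BLOCKED_PHRASES)

print(looks_like_circumvention("Pay me on C@$h4pp"))   # True
print(looks_like_circumvention("Nice rookie card!"))   # False
```

Even this toy normalizer shows why maintaining such rules by hand quickly becomes a losing game: each new obfuscation trick demands another manual update.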
When apps fail to moderate in-app chat, they risk community safety, their reputation, and sometimes even legal action. Dangerous dynamics can emerge between users if chat is not closely moderated; left unchecked, CollX's stellar performance metrics, like user retention, might have declined, too.
The time and effort it takes to rebuild an app's image and its relationships with users, investors, and advertisers are often far greater than the cost of a more sophisticated moderation tool.
Alexander Liriano, Director of Operations at CollX, manually reviewed every flagged instance of suspicious, harmful, and off-topic content and took appropriate action against those who violated the community guidelines.
Liriano says, "When I first joined CollX, I realized we didn't have any process in place to prevent users from misusing the app. We had a blocklist of words, but it couldn't consider the context of their use in conversation." If Liriano wanted to protect the CollX community proactively, he would need the assistance of an advanced AI-powered solution to lighten the administrative moderation load.
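The limitation Liriano describes is easy to demonstrate: a context-blind blocklist flags words with no regard for how they are used. A minimal sketch (the blocklist entries are hypothetical, not CollX's actual list):

```python
# Illustrative only: a context-blind blocklist like the one CollX
# originally relied on. Entries are hypothetical.
BLOCKLIST = {"steal", "scam"}

def blocklist_flags(message: str) -> bool:
    """Flag a message if any word, stripped of punctuation, is blocklisted."""
    words = {w.strip(".,!?'").lower() for w in message.split()}
    return not words.isdisjoint(BLOCKLIST)

# A genuine scam report and an innocent bargain both trip the filter,
# because the filter cannot tell intent apart:
print(blocklist_flags("This guy tried to steal my card"))       # True
print(blocklist_flags("That price is a steal, I'll take it!"))  # True
```

In a trading marketplace, where "a steal" usually just means a good deal, this kind of false positive is exactly why a context-aware, AI-powered approach was needed.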
Integrating & Leveraging Stream's Auto Moderation Solution
After Liriano proved the need for a more advanced moderation solution to CollX co-founder Ted Mann, he connected with Adnan Al-Khatib, the Principal Product Manager of Moderation at Stream.
Liriano worked closely with Stream to implement Auto Moderation, saying, "Adnan was always quick to answer my questions and respond to meeting requests---even while on PTO! He is my go-to guy. Adnan is great to work with and will always find a solution to overcome roadblocks you may encounter."
Because Auto Moderation comes ready to deploy with no additional coding or integration work required, CollX was able to roll out its new and improved moderation workflow quickly. This saved the CollX development team time and engineering resources, allowing it to invest in other roadmap areas at maximum velocity.
Liriano bolstered CollX's existing blocklist strategy with Auto Moderation's platform circumvention, semantic filter, real-time behavioral nudge moderation, and commercial spam prevention tools. "Before using Stream's Auto Moderation, I would spend hours reviewing, moderating, banning, and blocking bad actors and inappropriate content," says Liriano.
Stream's moderator-centric features now let Liriano spend a fraction of that time tracking the violation patterns and impact of malicious users, auto-flagging harmful content, and blocking bot traffic and commercial spam. Because Auto Moderation takes user intent and the contextual meaning of flagged words or phrases into account, well-intentioned users who casually use profanity while messaging aren't mistakenly blocked or banned.
The Impact of Auto Moderation on CollX
Before implementing Auto Moderation, Liriano recalls users flooding his inbox with reports of spam, scams, and harassment. Now, he says, he hasn't received a single email complaint in months.
Liriano says, "We've seen a 90% reduction in phishing attempts and harmful content on CollX, thanks to Auto Moderation. It gives us the power to take a proactive approach to moderation and will allow us to scale more efficiently. The less time I have to spend babysitting scammers, the more I can dedicate to other important business areas."
Liriano clarifies that while the reduction is impressive, there will always be malicious users who attempt to scam and spam CollX users over in-app chat. The difference now is that he can take a proactive approach to moderating the platform with help from Auto Moderation, adding, "I like the fact that you can stop dangerous content in its tracks before any harm occurs."
Stream created Auto Moderation with Trust & Safety teams, moderators, and community managers in mind. We aim to enable human moderators to scale their efforts effectively without clogging bandwidth with repetitive tasks. Auto Moderation is compatible with every app and can automatically discover new varieties of harmful content as they emerge—even before users report them.
Contact our team if you'd like to start identifying harmful behaviors in real time.