Stream’s Principal Moderation PM Talks Trust & Safety On GDI Podcast

2 min read
Kimmy L.
Published May 19, 2023

Adnan Al-Khatib, Principal Product Manager of Moderation at Stream, recently joined the GDI podcast during Trust and Safety Month to discuss content moderation.

Stream offers a cutting-edge, end-to-end auto-moderation solution that adapts in real time to your app's needs. This powerful tool protects users from bots, scammers, and other bad actors.

Let's take a closer look at some of the key points discussed during the conversation.

Why are content moderation and safety so critical to social platforms?

When it comes to social platforms, content moderation and safety go hand in hand. Neglecting to moderate your platform can lead to declining user engagement and retention. Users expect a communication platform to be safe, and as an app provider, your window to earn their trust is short.

Content moderation significantly impacts user satisfaction, platform growth, and engagement. By promptly identifying and addressing harmful content, platforms can create a positive user experience, build a loyal audience, and maintain a thriving community.

What technologies are necessary to keep user experience as positive as possible?

Stream's Automated Moderation API uses AI but leaves room for human moderation efforts, creating a more holistic tool. This lets your app reap the unique benefits of each method: AI can identify harmful content at scale, while humans are better at reading the contextual cues in flagged content.
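
To make the human-in-the-loop idea concrete, here is a minimal sketch of a review pass using Stream's Node SDK (stream-chat). The queryMessageFlags filter shape and the askModerator hook are assumptions for illustration; verify endpoint names against Stream's current server-side moderation docs.

```typescript
import { StreamChat } from 'stream-chat';

// Server-side client: the API secret must never ship to end users.
const client = new StreamChat('API_KEY', 'API_SECRET');

// One review pass: fetch AI-flagged messages, then apply a human decision.
async function reviewFlaggedMessages() {
  // Filter and options shapes are assumptions to check against the docs.
  const { flags } = await client.queryMessageFlags({}, { limit: 25 });

  for (const flag of flags) {
    const message = flag.message;
    if (!message?.id) continue;

    // The final call stays with a person; askModerator stands in for
    // your own review UI or queue.
    if (await askModerator(message.text ?? '')) {
      await client.deleteMessage(message.id); // soft delete by default
    }
  }
}

// Hypothetical human-in-the-loop hook.
async function askModerator(text: string): Promise<boolean> {
  console.log('Needs human review:', text);
  return false;
}
```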

How does Stream work to understand user intent, and what are some practical applications of this technology for a platform?

Stream's Auto Moderation engine focuses on the meaning and intent of potentially inappropriate content rather than on specific keywords or phrases. Leveraging natural language processing and modern AI, Stream moderates user behavior, catching alarming content that keyword-only engines would miss. This approach is not only more effective, it also powers an easier, more intelligent workflow.
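
To see the difference in practice: in Stream Chat, automod is configured per channel type, where a simple keyword blocklist can be swapped for the AI engine described above. A minimal sketch, assuming the stream-chat Node SDK and the automod / automod_behavior channel-type fields; double-check the exact values against the current docs.

```typescript
import { StreamChat } from 'stream-chat';

const client = new StreamChat('API_KEY', 'API_SECRET');

// 'simple' matches a keyword blocklist; 'AI' evaluates the meaning and
// intent of each message. 'flag' surfaces hits for human review rather
// than blocking them outright.
async function enableIntentBasedAutomod() {
  await client.updateChannelType('messaging', {
    automod: 'AI',
    automod_behavior: 'flag',
  });
}
```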

What is considered best practice when implementing a new moderation tool?

There are two aspects to consider when answering this question: technology and people. On the technology side, it's a question of integration, that is, how the moderation tooling fits into your product.

Stream's Auto Moderation engine does not require integration; no development work is needed to enable it. Your Trust and Safety team can simply plug the moderation solution into your application from Stream's website.

Different platforms have different requirements and policies, so Stream is built to be as flexible as possible, allowing apps to be as specific and customized as they need. Stream is also building an API so developers can build on the solution programmatically.
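
For teams that want programmatic control in the meantime, the existing Chat SDK already exposes server-side moderation actions. A hedged sketch with placeholder IDs, based on the flagMessage and banUser endpoints in stream-chat; confirm option names against the current API reference.

```typescript
import { StreamChat } from 'stream-chat';

const client = new StreamChat('API_KEY', 'API_SECRET');

async function takeModerationActions() {
  // Flag a message on behalf of a reporting user so it enters review.
  await client.flagMessage('message-id', { user_id: 'reporting-user-id' });

  // Time-box a bad actor: timeout is in minutes.
  await client.banUser('offending-user-id', {
    banned_by_id: 'moderator-id',
    timeout: 60,
    reason: 'spam',
  });
}
```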

As for people-powered moderation, human teams can only review a certain number of messages a day. Introducing automation reduces the repetition inherent in manual moderation, freeing moderators to focus on the judgment piece, which only needs revisiting when novel cases arise.

Are there any challenges you predict will become more prevalent?

One prominent challenge facing the industry is the detection of replicants, bots that mimic human behavior. As these bots become increasingly sophisticated, differentiating them from genuine users will only get harder. Adnan emphasizes the importance of addressing this issue by focusing on intent and behavior, an effort that remains a long-term endeavor for Stream.

Stream is committed to Trust and Safety and continues to work at the forefront of AI and NLP technologies. To experience the capabilities of Stream's Auto Moderation tool firsthand, we invite you to sign up for a trial and start moderating chat.
