Insider Q&A: Trust and safety exec talks about AI and content moderation
Alex Popken was a longtime trust and safety executive at Twitter, focusing on content moderation, before leaving in 2023. Now she is vice president of trust and safety at WebPurify, a content moderation service provider that works with businesses to help ensure the content people post on their sites follows the rules.

She points to Russian interference in the 2016 U.S. presidential election as the moment the industry realized, in a meaningful way, that without content moderation bad actors can undermine democracy. As a content moderation team, she says, you are trying to stay one step ahead and anticipate new risks. She imagines a world in which AI is an important tool in the content moderation tool belt, for things like threat intelligence.