Facebook denies report that its AI fails to detect hate speech
Facebook has denied a report claiming that the AI technology it uses to detect hate speech and violence has little impact. Facebook's post follows a Wall Street Journal report that said Facebook's AI cannot consistently identify first-person shooting videos or racist rants, and at times cannot even tell the difference between cockfighting and car crashes.

"Recent reporting suggests that our approach to addressing hate speech is much narrower than it actually is, ignoring the fact that hate speech prevalence has dropped to 0.05%, or 5 views per every 10,000 on Facebook," Guy Rosen, VP of Integrity, said in a blog post. Rosen argued that using technology to remove hate speech is only one way Facebook counters it: if the company is not confident that a piece of content meets the bar for removal, the platform may reduce that content's distribution, and it won't recommend Groups, Pages or people that regularly post content likely to violate its policies.