Facebook fails again to detect hate speech in ads

SAN FRANCISCO — The test couldn’t have been much easier — and Facebook still failed. Facebook and its parent company Meta flopped once again in a test of how well they could detect obviously violent hate speech in ads submitted to the platform by the watchdog group Global Witness.

The hateful messages focused on Ethiopia, where internal documents obtained by whistleblower Frances Haugen showed that Facebook’s ineffective moderation is “literally fanning ethnic violence,” as she said in her 2021 congressional testimony. The group created 12 text-based ads that used dehumanizing hate speech to call for the murder of people belonging to each of Ethiopia’s three main ethnic groups — the Amhara, the Oromo and the Tigrayans.

In November, Meta said it removed a post by Ethiopia’s prime minister that urged citizens to rise up and “bury” rival Tigray forces who threatened the country’s capital.

“When ads calling for genocide in Ethiopia repeatedly get through Facebook’s net — even after the issue is flagged with Facebook — there’s only one possible conclusion: there’s nobody home,” said Rosa Curling, director of Foxglove, a London-based legal nonprofit that partnered with Global Witness in its investigation.