Facebook fails again to detect hate speech in ads
SAN FRANCISCO — The test couldn't have been much easier, and Facebook still failed. Facebook and its parent company Meta flopped once again in a test of how well they could detect obviously violent hate speech in ads submitted to the platform by the nonprofit groups Global Witness and Foxglove.

The hateful messages focused on Ethiopia, where internal documents obtained by whistleblower Frances Haugen showed that Facebook's ineffective moderation is "literally fanning ethnic violence," as she said in her 2021 congressional testimony.

The groups created 12 text-based ads that used dehumanizing hate speech to call for the murder of people belonging to each of Ethiopia's three main ethnic groups: the Amhara, the Oromo and the Tigrayans.

In November, Meta said it removed a post by Ethiopia's prime minister that urged citizens to rise up and "bury" rival Tigray forces who threatened the country's capital.

"When ads calling for genocide in Ethiopia repeatedly get through Facebook's net — even after the issue is flagged with Facebook — there's only one possible conclusion: there's nobody home," said Rosa Curling, director of Foxglove, a London-based legal nonprofit that partnered with Global Witness in its investigation.


