Opinion | When horror goes live on the web and fools fail the world
Live Mint

Last Friday’s horrific shootings at two New Zealand mosques have yet again highlighted Big Tech’s inability to manage its own spawn. The media outlet Motherboard, which covered Facebook’s handling of the problem at Motherboard.vice.com as it surfaced, had this to say: “Like any content on Facebook, be those posts, photos or pre-recorded videos, users can report live broadcasts that they believe contain violence, hate speech, harassment or other terms of service violating behaviour. After this, content moderators will review the report and make a decision on what to do with the live stream.”

According to an internal training document for Facebook content moderators obtained by Motherboard, moderators can “snooze” a Facebook Live stream, meaning it will resurface every five minutes for moderators to check whether anything has developed.

Facebook, YouTube and others have carved out specific exceptions for news organizations. This means that the same video clip that is shut down on an individual poster’s account because it amounts to hate speech is allowed to run in a news report by a TV channel that uses social media to extend its reach.

According to Google’s transparency site for YouTube, of the roughly 8.7 million videos deleted last quarter, approximately 6.2 million were taken down by automated programmes.
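The five-minute “snooze” cycle reported by Motherboard amounts to a simple re-review queue: a snoozed stream is not dismissed but re-enters the moderation queue after a fixed delay. A minimal sketch of that idea, assuming nothing about Facebook’s actual systems (all class and variable names here are illustrative, not Facebook’s implementation):

```python
import heapq

# Per the reported training document, a snoozed stream resurfaces
# every 5 minutes. This constant and everything below are illustrative.
SNOOZE_SECONDS = 5 * 60


class ModerationQueue:
    """Hypothetical re-review queue: snoozing a live stream schedules it
    to reappear for moderators after a fixed delay."""

    def __init__(self):
        self._heap = []  # min-heap of (due_time, stream_id)

    def snooze(self, stream_id, now):
        # Schedule the stream to resurface SNOOZE_SECONDS from now.
        heapq.heappush(self._heap, (now + SNOOZE_SECONDS, stream_id))

    def due(self, now):
        # Return all streams whose snooze period has elapsed.
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[1])
        return ready


q = ModerationQueue()
q.snooze("live-123", now=0)
print(q.due(now=299))  # still snoozed
print(q.due(now=300))  # resurfaces after 5 minutes
```

The point of such a design is that a snooze is a deferral, not a decision: unless a moderator eventually removes the stream, it keeps cycling back, which is why a fast-moving live broadcast can run for minutes between checks.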