Inside Facebook, YouTube and Twitter’s Struggle to Purge Video of the New Zealand Mosque Attacks


Facebook’s AI technology can spot blood and gore from the livestream, enabling Facebook to immediately remove uploads showing the most gruesome footage, a rep tells TheWrap

(Photo: Getty Images)

The daunting, near-impossible challenge of purging footage of violent attacks from major tech platforms like Facebook, Twitter and YouTube was made clear on Friday in the minutes and hours after the horrifying mass shootings at two New Zealand mosques that killed 49 people and left dozens of others wounded.

An attack like this isn't something that can be proactively blocked from social media. Instead, the shooting, which the suspect livestreamed using Facebook Live and which was then recirculated on other platforms, forces human moderators and artificial intelligence tools to act quickly to block it. But the reaction isn't instantaneous or flawless.

The responses of Twitter, Facebook and YouTube underscore the whack-a-mole nature of policing massive social media platforms with equally massive audiences. Facebook has more than 2 billion users. YouTube pulls in more than 1 billion views each day and has hundreds of hours of content uploaded each minute. Twitter has 321 million users.

Entirely eradicating the video is immensely difficult, if not impossible. While the platforms are busy deleting posts, some users are working to share the attack. It’s the digital equivalent of capping a busted fire hydrant.

The attacker, a 28-year-old Australian named Brenton Tarrant, may have known this would be the case. By livestreaming the bloodshed as he entered a mosque in Christchurch, New Zealand, and opened fire, he was able to maximize the anguish while allowing a faction of the internet to pick up where he left off, re-sharing the attack and thereby searing it into the memory of anyone who watches it.

And as one person familiar with Facebook’s review process told TheWrap, taking the draconian measure of banning new uploads that mention keywords like “mosque” or “shooting” isn’t feasible. Not only would it fail to block all uploads of the attack video, it would stifle innocuous reports and commentary on the tragedy.

The livestream, which captured anguished cries piercing the brief moments between gunshots, remained up for nearly 20 minutes before New Zealand police officers alerted Facebook that the attack was being broadcast.

Facebook spokesperson Mia Garlick said the company then “immediately removed” the livestream and deleted both the suspected shooter’s Facebook and Instagram accounts. Facebook is also “removing any praise or support for the crime and the shooter or shooters,” Garlick said, adding that the social media giant is continuing to work with police on its investigation of the case.

TheWrap was unable to find video of the livestreamed attack on Facebook on Friday. That's largely due to the measures Facebook took after removing the livestream, which included producing a scan of the video that allows the company to detect new uploads containing the same scenes as the livestream.
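
Facebook hasn't said exactly how that scan works, but systems that catch re-uploads of a known video typically compare perceptual fingerprints of sampled frames. The Python sketch below is a minimal illustration of that general technique, not Facebook's actual pipeline; it uses the open-source imagehash and OpenCV libraries, and the sampling rate, distance threshold and match ratio are all assumptions.

```python
# Minimal sketch of fingerprint-based re-upload matching (illustrative only).
import cv2                     # pip install opencv-python
import imagehash               # pip install imagehash
from PIL import Image

def frame_hashes(video_path, every_n_frames=30):
    """Sample frames from a video and compute a perceptual hash for each."""
    capture = cv2.VideoCapture(video_path)
    hashes, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    capture.release()
    return hashes

def matches_known_video(upload_hashes, reference_hashes, max_distance=8):
    """Flag an upload if enough sampled frames sit close to the reference video."""
    hits = sum(
        1
        for h in upload_hashes
        if any(h - ref <= max_distance for ref in reference_hashes)
    )
    # Require roughly a quarter of sampled frames to match (assumed threshold).
    return hits >= max(1, len(upload_hashes) // 4)
```

In practice, an upload flagged this way would be routed to automatic removal or human review before it can spread further.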

According to one Facebook rep, the company’s AI technology is also able to spot blood and gore from the livestream — enabling Facebook to immediately remove uploads showing the most gruesome aspects of the livestream while allowing news reports on the massacre (which are unlikely to include graphic footage). The company is actively looking for links to the livestream on other sites, then alerting those sites to take the video down, the rep added.
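
Facebook's models aren't public, so the gating logic the rep describes can only be sketched hypothetically: score sampled frames with a graphic-content classifier and auto-remove an upload only when enough frames cross a threshold, which lets a news clip that cuts away from the worst footage pass through. In the sketch below, `gore_score` is an assumed stand-in for such a classifier, and the threshold values are illustrative.

```python
# Hypothetical gating logic: auto-remove only if several frames look graphic.
from typing import Callable, Iterable

def should_auto_remove(frames: Iterable,
                       gore_score: Callable[[object], float],
                       frame_threshold: float = 0.9,
                       min_graphic_frames: int = 3) -> bool:
    """Return True if enough sampled frames score above the graphic threshold."""
    graphic = sum(1 for frame in frames if gore_score(frame) >= frame_threshold)
    return graphic >= min_graphic_frames
```

Under this kind of rule, a raw re-upload of the livestream would trip the limit quickly, while a broadcast-style report that blurs or omits the shooting would not.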

But the recorded shooting continues to linger on other platforms, despite a unified push to remove it.

A person familiar with Twitter's review process said the company is using a combination of AI and an international team of moderators to scan the platform and remove the offending video. This process hasn't been foolproof, however: several tweets sharing video of the attack remained easily findable on Twitter on Friday, with many staying up for hours at a time. An individual familiar with Twitter's moderation team said the company strongly encourages users to flag tweets sharing the video so that its moderators can more quickly remove the clips. (Sharing the video violates Twitter's rule against "glorification of violence.")

Twitter declined to share how many tweets of the video it has removed.

YouTube, the biggest video hub on the planet, has run into similar issues keeping the video off its site. A person familiar with YouTube's response said the company had removed thousands of uploads of the shooting by Friday. News reports showing segments of the attack will not be removed from YouTube, as the company allows exceptions to its ban on graphic content when the footage has news value. But the company, like Facebook and Twitter, is leaning on machine-learning tools and its human moderation team to remove uploads of the raw video.

Even after thousands of uploads have been removed, a constant stream of new uploads arrives each hour from users looking to skirt YouTube's enforcement mechanisms. Most are quickly caught and taken down, as a YouTube search of the last hour of uploads showed, but some slip through the cracks, allowing viewers to watch the attack.

For now, these tech giants are left with an imperfect solution to the problem of completely blocking video of violent attacks from spreading: a reliance on flawed technology and human moderators with finite time and energy.

Jon Levine contributed to this report. 
