Internet companies scrambled on Friday to remove graphic video filmed by a gunman in the New Zealand mosque shootings, footage that remained widely available on social media for hours after the horrific attack.
Facebook said it took down a livestream of the shootings and removed the shooter’s Facebook and Instagram accounts after being alerted by police.
At least 49 people were killed at two mosques in Christchurch, New Zealand’s third-largest city.
Using what appeared to be a helmet-mounted camera, the gunman livestreamed in horrifying detail 17 minutes of the attack on worshipers at the Al Noor Mosque, where at least 41 people died.
Several more worshipers were killed at a second mosque a short time later.
The shooter also posted a 74-page manifesto on social media under the name Brenton Tarrant, identifying himself as a 28-year-old Australian and a white nationalist out to avenge attacks in Europe perpetrated by Muslims.
“Our hearts go out to the victims, their families and the community affected by this horrendous act,” Facebook New Zealand spokeswoman Mia Garlick said in a statement.
Facebook is “removing any praise or support for the crime and the shooter or shooters as soon as we’re aware,” she said. “We will continue working directly with New Zealand Police as their response and investigation continues.”
Twitter and Google, which owns YouTube, also said they were working to remove the footage from their sites.
The furor highlighted once again the speed at which graphic and disturbing content from a tragedy can spread around the world, and how Silicon Valley tech giants are still grappling with how to prevent that from happening.
British tabloid newspapers, such as the Daily Mail and the Sun, posted screenshots and video snippets on their Web sites. One journalist tweeted that several people sent her the video via the Facebook-owned WhatsApp messaging app.
New Zealand police urged people not to share the footage.
Many Internet users called for tech companies and news sites to take the material down.
The video’s spread underscores the challenge Facebook faces even after it has stepped up efforts to keep inappropriate and violent content off its platform.
In 2017, the company said it would hire 3,000 more people to review videos and other posts, on top of the 4,500 it already tasked with identifying criminal and other questionable material for removal.
However, that is just a drop in the bucket of what is needed to police the social media platform, said Siva Vaidhyanathan, author of “Antisocial Media: How Facebook Disconnects Us and Undermines Democracy.”
If Facebook wanted to monitor every livestream to prevent disturbing content from making it out in the first place, “they would have to hire millions of people,” something it is not willing to do, said Vaidhyanathan, who teaches media studies at the University of Virginia.
“We have certain companies that have built systems that have inadvertently served the cause of violent hatred around the world,” Vaidhyanathan said.
Facebook and YouTube were designed to share pictures of babies, puppies and other wholesome things, he said.
However, “they were expanded at such a scale and built with no safeguards such that they were easy to hijack by the worst elements of humanity,” he added.
With billions of users, Facebook and YouTube are “ungovernable” at this point, said Vaidhyanathan, who called Facebook’s livestreaming service a “profoundly stupid idea.”