Researchers have warned tech companies for years that online extremism and radicalization result in real-world violence. On Friday, those warnings appeared prophetic again when a shooter with a history of social media radicalism entered two mosques in Christchurch, New Zealand, and killed 49 people.
An online manifesto apparently connected to the accused shooter listed a variety of online influences related to the crime, including “the internet” itself.
“Much like a lot of researchers and journalists on this beat, I’m yo-yo-ing between hopeless and furious,” said Becca Lewis, a researcher with the technology research nonprofit Data + Society. “It’s not gratifying to be right in this situation.”
Researchers told NBC News that they had raised concerns about online extremism both in conversations and in published research papers, but said their warnings and ideas to help prevent online radicalization have been largely ignored. Lewis published a report in September that detailed how YouTube influencers and far-right extremists gamed YouTube’s algorithm to push radicalization messages and turn a profit.
Lewis and other online extremism researchers are now hoping the shooting could be a wake-up call to companies like Facebook and YouTube, which they hope will be more transparent and proactive in scuttling white supremacist and extremist content.
Lewis said, however, that she is not particularly optimistic.
“Where I get pessimistic about it is that these problems didn’t start with the tech companies,” Lewis said. “They’ve just been very profitable for them.”
Facebook and YouTube did not immediately respond to a request for comment.
The manifesto, which frequently mentioned far-right influencers and groups on YouTube and other platforms, was posted next to a link to a Facebook page that live-streamed the murders. On the Facebook page and a separate Twitter account linked to the shooter, links to anti-immigrant YouTube videos from both white nationalist YouTube channels and state-funded news operations like Russia Today filled the timeline.
The pattern of social media and message board posts echoed other mass killings in recent years that have been linked to online extremism. Whitney Phillips, an assistant professor of communication at Syracuse University who studies the effects of online trolling on mainstream culture, said the social media postings tied to the alleged shooter were all too familiar.
“This case is heartbreaking and disgusting and viscerally repulsive and it’s not surprising. That’s what’s so upsetting,” Phillips said. “There are so many ways in which social media platforms facilitate, embolden and incentivize all kinds of bigoted expression.”
Internet-native extremism is a relatively recent phenomenon. It was preceded by real-world extremist movements that created the first content-moderation problems for tech companies such as Google, which dedicated significant resources to eliminating ISIS propaganda.
Phillips said that the most toxic parts of the internet grew out of a digital culture of trolling that had at one time seemed mischievous but mostly innocuous. That changed dramatically in the past several years, as memes and provocateurs on social media began to pervade both pop culture and politics. Millions of dollars were poured into propping up meme-based political content and advertisements, both from U.S. political campaigns and lobbying organizations as well as shadowy foreign influence campaigns seeking to sow division and amp up racist rhetoric.
What emerged was “a soup of toxicity online” that maintained a veneer of innocence, Phillips said.
“A lot of the stuff that passed as fun, the media manipulation strategies that were part of ‘fun trolling’ in the early days, a lot of that established a behavioral blueprint and also created a kind of umbrella that people could hide under,” Phillips said.
While it’s not immediately clear how the shooter became radicalized, the manifesto revealed that he is apparently well-versed in the language and culture of the fringe internet. The 74-page document is peppered with memes and inside jokes meant as a wink to the users on the politics portion of the 8chan message board where he first posted his intention.
The references also serve to deceive outsiders and journalists, acting as a trap meant to trick them into making assumptions or mistakes. In the live-streamed video, the shooter looks into the camera and says “subscribe to PewDiePie,” a running internet joke about a campaign centered on a popular YouTube star. The star, whose given name is Felix Kjellberg, also became something of a martyr in some corners of the internet to fans who said they believed the media’s coverage of his use of Nazi imagery was unfair.
Researchers described these efforts as “bait,” urging journalists not to share the document or the video, and requesting that sharing sites like YouTube not make it available for viewing.
Joan Donovan, director of the Technology and Social Change Research Project at Harvard University’s Shorenstein Center, warned that the online postings showed all the hallmarks of a seasoned internet troll who had planned a media-ready spectacle.
“His social media is all pre-packaged for journalists’ consumption,” Donovan said. “And it’s spread across platforms so it’s impossible to really moderate or mitigate.”
Donovan also warned against focusing on the online postings in lieu of the bigger problems posed by digital platforms.
“There’s no kernel of truth here,” she said. “There’s nothing to get below the surface of. We have someone who was obviously just a vicious racist.”
She added: “If the platforms aren’t going to be dedicated to removing this stuff, then we aren’t going to have the internet we want, we’re going to have the internet we deserve.”
CORRECTION (March 15, 2019, 8:43 p.m. ET) A previous version of this article misstated where the shootings took place. It was two mosques in Christchurch, not one.