Researchers say YouTube’s policies and algorithms are still too opaque.

YouTube, Facebook, and other social media platforms were instrumental in radicalizing the terrorist who killed 51 worshippers in a March 2019 attack on two New Zealand mosques, according to a new report from the country's government. Online radicalization experts speaking with WIRED say that, while platforms have cracked down on extremist content since then, the fundamental business models behind top social media sites still play a role in online radicalization.
According to the report, released this week, the terrorist regularly watched extremist content online and donated to organizations like The Daily Stormer, a white supremacist site, and Stefan Molyneux's far-right Freedomain Radio. He also gave directly to Austrian far-right activist Martin Sellner. "The individual claimed that he was not a frequent commenter on extreme right-wing sites and that YouTube was, for him, a far more significant source of information and inspiration," the report says.
The terrorist's interest in far-right YouTubers and edgy forums like 8chan is not a revelation. But until now, the details of his involvement with these online far-right organizations were not public. Over a year later, YouTube and other platforms have taken steps toward accepting responsibility for white supremacist content that propagates on their websites, including removing popular content creators and hiring thousands more moderators. Yet according to experts, until social media companies open the lid on their black-box policies and even algorithms, white supremacist propaganda will always be a few clicks away.
“The problem goes far deeper than the identification and removal of pieces of problematic content,” said a New Zealand government spokesperson over email. “The same algorithms that keep people tuned to the platform and consuming advertising can also promote harmful content once individuals have shown an interest.”
Entirely unexceptional
The Christchurch attacker's pathway to radicalization was "entirely unexceptional," say three experts speaking with WIRED who had reviewed the government report. He came from a broken home and from a young age was exposed to domestic violence, sickness, and suicide. He had unsupervised access to a computer, where he played online games and, at age 14, discovered the online forum 4chan. The report details how he expressed racist ideas at his school, and he was twice called in to speak with its anti-racism contact officer regarding anti-Semitism. The report describes him as somebody with "limited personal engagement, which left considerable scope for influence from extreme right-wing material," which he found on the internet and in books. Aside from a couple of years working as a personal trainer, he had no consistent employment.
The terrorist's mother told the Australian Federal Police that her concerns grew in early 2017. She remembered him "talking about how the Western world was coming to an end because Muslim migrants were coming back into Europe and would out-breed Europeans," the report says. The terrorist's friends and family provided narratives of his radicalization that are supported by his Internet activity: shared links, donations, comments. While he was not a frequent poster on right-wing sites, he spent ample time in the extremist corners of YouTube.
A damning 2018 report by Stanford researcher and PhD candidate Becca Lewis describes the "alternative media system" on YouTube that fed young viewers far-right propaganda. This network of channels, which range from mainstream conservatives and libertarians to overt white nationalists, collaborated with each other, funneling viewers into increasingly extreme content streams. She points to Stefan Molyneux as an example. "He's been shown time and time again to be an important vector point for people's radicalization," she says. He claimed there were scientific differences between the races and promoted debunked pseudoscience. "But because he wasn't a self-identified or overt neo-Nazi, he became embraced by more mainstream people with more mainstream platforms." YouTube removed Molyneux's channel in June of this year.
This step-ladder of amplification is in part a byproduct of the business model for YouTube creators, says Lewis. Revenue is directly tied to viewership, and exposure is currency. While these networks of creators played off each other's fan bases, the drive to gain more viewers also incentivized them to post increasingly inflammatory and incendiary content. "One of the most disturbing things I found was not only evidence that audiences were getting radicalized, but also data that literally showed creators getting more radical in their content over time," she says.
Making significant progress?
In an email statement, a YouTube spokesperson says that the company has made "significant progress in our work to combat hate speech on YouTube since the tragic attack at Christchurch." Citing 2019's strengthened hate speech policy, the spokesperson says that there has been "a 5x spike in the number of hate videos removed from YouTube." YouTube has also altered its recommendation system to limit the spread of "borderline content."
YouTube says that of the 1.8 million channels terminated for violating its policies last quarter, 54,000 were for hate speech, the most ever. YouTube also removed more than 9,000 channels and 200,000 videos for violating rules against promoting violent extremism. In addition to Molyneux, YouTube's June bans included David Duke and Richard Spencer. (The Christchurch terrorist donated to the National Policy Institute, which Spencer runs.) For its part, Facebook says it has banned over 250 white supremacist groups from its platforms and strengthened its "dangerous individuals and groups" policy.
"It's clear that the core of the business model has an impact on allowing this content to grow and thrive," says Lewis. "They've tweaked their algorithm, they've kicked some people off the platform, but they haven't addressed that underlying issue."
Online culture does not begin and end with YouTube or anywhere else, by design. Fundamental to the social media business model is cross-platform sharing. "YouTube isn't just a place where people go for entertainment; they get sucked into these communities. Those allow you to participate via comment, sure, but also by making donations and boosting the content in other places," says Joan Donovan, research director of Harvard University's Shorenstein Center on Media, Politics, and Public Policy. According to the New Zealand government's report, the Christchurch terrorist regularly shared far-right Reddit posts, Wikipedia pages, and YouTube videos, including in an unnamed gaming site chat.
Fitting in
The Christchurch mosque terrorist also followed and posted on several white nationalist Facebook groups, sometimes making threatening comments about immigrants and minorities. According to the report authors who interviewed him, the individual "did not accept that his comments would have been of concern to counter-terrorism agencies." He thought this because of the very large number of similar comments that can be found on the internet. (At the same time, he did take steps to minimize his digital footprint, including deleting emails and removing his computer's hard drive.)
Reposting or proselytizing white supremacist content without context or warning, says Donovan, paves a frictionless road for the spread of fringe ideas. "We have to look at how these platforms provide the capacity for broadcast and for scale that, unfortunately, have now started to serve negative ends," she says.
YouTube's business incentives inevitably stymie that sort of transparency. There aren't great ways for outside experts to assess or compare techniques for minimizing the spread of extremism cross-platform. They often must rely instead on reports put out by the businesses about their own platforms. Daniel Kelley, associate director of the Anti-Defamation League's Center for Technology and Society, says that while YouTube reports an increase in extremist-content takedowns, the measure doesn't speak to its past or current prevalence. Researchers outside the company don't know how the recommendation algorithm worked before, how it changed, how it works now, and what the effect is. And they don't know how "borderline content" is defined, an important point considering that many argue it continues to be prevalent across YouTube, Facebook, and elsewhere.
Questionable results
"It's hard to say whether their effort has paid off," says Kelley. "We don't have any information on whether it's really working or not." The ADL has consulted with YouTube, but Kelley says he hasn't seen any documents on how it defines extremism or trains content moderators on it.
A real reckoning over the spread of extremist content has incentivized big tech to put big money on finding solutions. Throwing moderation at the problem appears effective. How many banned YouTubers have withered away in obscurity? But moderation doesn't address the ways in which the foundations of social media as a business (creating influencers, cross-platform sharing, and black-box policies) are also integral factors in perpetuating hate online.
Many of the YouTube links the Christchurch shooter shared have been removed for breaching YouTube's moderation policies. But the networks of people and ideologies engineered through them and through other social media persist.
This story originally appeared on wired.com.