New Zealand, Facebook, and the far-right: How social media made extremism viral

One of the most talked-about aspects of the Christchurch mosque massacre in New Zealand last Friday, March 15, is the fact that the 28-year-old shooter’s attack, which left 50 Muslims dead, including eight Indians, began and ended online.

The shooter uploaded a 74-page, meme-laden manifesto espousing white supremacy to Twitter and 8chan; in it, he identified himself as a white nationalist out to avenge attacks by Muslims in Europe. He then streamed the carnage live on Facebook as he opened fire on the worshippers.

Major web platforms, including social media and video-sharing sites, have faced renewed criticism for being slow to respond to the first-ever live-streamed mass shooting as they scrambled to take down the 17-minute video.

The devastating attack and its online nature have triggered some necessary questions, yet again: has social media gone too far in aiding and abetting white supremacist terrorism? Has the internet abyss become a breeding ground for the far-right to carry out its extremist agenda? What is being done, and what more can tech giants do, to block and ban reported content? Has freedom of speech gone too far, or has the far-right monopolised it?

White-supremacy check and content moderation

How else was it even possible that the shooter, later identified as Australian citizen Brenton Harrison Tarrant, live-streamed his shooting spree via a body-mounted camera?

He allegedly posted a link to the 17-minute-long stream straight to 8chan, an imageboard site known for being a hotbed of far-right extremist views, but more on that later.

The live video was reportedly viewed fewer than 200 times on Facebook while the stream was active, and at least 4,000 times after it ended and before it was taken down. The social network later also took down the gunman’s Facebook and Instagram accounts.

In a blog post published Monday, Facebook claimed that no users reported the video of the New Zealand mosque shootings while it was still live.

This is where it gets a little tricky: a reporter for Right Wing Watch insists he was alerted to the live video and raised the alarm immediately. If that is true, Facebook had no apparent reason to ignore the warning and sit this one out.

A response with a difference

Many have condemned Facebook’s lack of backstops to curb hate speech and movements bent on inflicting violence on minority communities.

Last year, our report on Facebook’s diversity problem touched upon its role in spreading hate speech against African-American communities while routinely suspending their support networks on outlandish grounds. Far-right accounts are rarely held to such high standards.

More recently, Facebook briefly banned Modi critic Dhruv Rathee’s account just weeks before the Lok Sabha polls, after one of his posts warned people against fascism, raising many eyebrows.

Delete Facebook?

In the wake of the New Zealand attack, Facebook’s shares fell and executives quit over the backlash. The video sparked outrage and concern all over the world; AirAsia Group CEO Tony Fernandes quit the platform in protest at this oversight on Facebook’s part.

In a Twitter post, Fernandes said he had deactivated his Facebook account because of the amount of “hate that goes on social media”, adding that the big tech giants could do much more to end this cycle of violence.

Brian Acton, the WhatsApp co-founder who left Facebook in 2017 over disagreements about encryption and the monetisation of the chat app, sounded the ‘Delete Facebook’ call once again, this time in a speech at Stanford University.

Acton asked the students to get rid of Facebook, criticising tech companies for failing to deliver when it comes to data privacy, social responsibility, and ethics.

Despite Facebook banning several accounts and suspending pages, the far-right has been able to circumvent its recent measures aimed at preventing extremist abuse of the platform. Owing to Facebook’s leniency, these groups continue to fundraise, mobilise support, and target and recruit extremists to endorse their xenophobic agenda.

A case in point

Even in Europe, where right-wing sentiment is steadily infiltrating populist governments and where data protection laws are at their strongest, especially in the wake of the Cambridge Analytica scandal, the social network took its time before permanently banning prominent rabble-rousers such as UK far-right activist Stephen Yaxley-Lennon.

With 1 million followers on Facebook, Yaxley-Lennon, who goes by the moniker ‘Tommy Robinson’, had previously co-founded the Islamophobic far-right pressure group the English Defence League. Twitter suspended him first, in March, for claiming that “Islam promotes killing people”.

Before the ban, a hidden global network of US think tanks, right-wing Australians, and Russian trolls was found to be providing him with financial, political, and moral support, using a Facebook ‘donate’ tool that was meant to be reserved for charities alone.

Even after this was brought to Facebook’s notice and the site disabled the tool, Robinson’s Facebook profile continued to direct supporters to his website, where they could make donations through a form.

More alarming still, despite being banned for repeatedly breaching Facebook’s community standards on hate speech, Yaxley-Lennon was able to use the platform just a week later to live-stream his harassment of an anti-fascist blogger, whom he doorstepped and doxxed.

He carried this out using a friend’s Facebook and Instagram accounts, with the video later posted to YouTube.

Fundamental flaws

Cross-platform bans on personal accounts affiliated with such movements are of immediate importance, as is tighter regulation of fundraising on the platform. Despite vowing to crack down on extremists, Facebook has allowed the far-right group Britain First and its anti-Muslim leader Paul Golding to set up new pages and pay for adverts.

On the uploading of such graphic videos, which clearly violate the site’s standards, YouTube’s Neal Mohan said the first-person viewpoint posed an unusual technical challenge for computers untrained to detect videos shot from that perspective.

Meanwhile, the New Zealand government and companies alike have challenged Facebook and other platform owners to take immediate steps to moderate hateful content effectively before another tragedy is streamed online. Prime Minister Jacinda Ardern said she has been in contact with Facebook COO Sheryl Sandberg to ensure the video is entirely scrubbed from the platform.

What about 8chan?

Journalist Robert Evans told NPR that 8chan “is essentially the darkest, dankest corner of the Internet. It is basically a neo-Nazi gathering place. And its primary purpose is to radicalise more people into eventual acts of violent, far-right terror”.

You may recall stray reports about the New Zealand shooter grinning and flashing an infamous neo-Nazi hand symbol in court, or about the gunman’s affinities with the Toronto van attacker, who had ties to Reddit’s incel culture. Both of these fads originated in, and were popularised by, the 4chan and 8chan communities found in the darkest reaches of the internet. 8chan was also booted from Google’s search listings over problems with child pornography.

According to the Guardian, these forums have the “ability to provide a sense of social connectedness among like-minded individuals, their legitimisation of otherwise repugnant beliefs, and their tendency to become ‘echo chambers’, places where contrary views are not expressed in any form”. But this is also happening front and centre on easily accessible platforms like Facebook and YouTube.

8chan’s security provider Cloudflare explained to Forbes that withdrawing its support wouldn’t do much to actually get the site off the internet. “We’re the FedEx of the internet, passing messages on, not looking inside the boxes,” said Alissa Starzak, Cloudflare’s head of policy.

Why it matters and what’s next

All these sites are essentially part of the open web, which netizens generally consider a “good” thing because it does away with traditional gatekeepers. At the same time, expecting tech companies to self-police content has proved a futile venture.

Now, with faith in mainstream social media weakening, governments around the world are calling for Facebook to be regulated the same way as the telecommunications and broadband industry.

The internet’s role in circumventing censorship and offering netizens freedom of speech has turned out to be a double-edged sword. What worsens the problem is that most of these sites are out to monetise their user base, making it unprofitable to take a stand against harmful populism or to ban prolific members. In fact, 8chan’s “free speech” rules were basically “just a ploy to get as many users as possible”.

As for new laws to regulate sites like Facebook and 8chan, countrywide blocks would be an extreme measure that would give either ISPs or governments a huge amount of power over the internet.

But you know ethics in the tech world has taken a back seat when Zuckerberg won’t even appear before the UK’s parliamentary “fake news” committee.

So if boycotting these platforms is really the only way out, how willing are we to make that sacrifice?


Prarthana Mitra is a Staff Writer at Qrius
