Big tech and social media bosses pledged to eradicate online extremism with the Christchurch Call in Paris three years ago - and today they claim atrocities echoing the 15 March massacre no longer go viral online. But the algorithms that amplify extremism and radicalise people in the first place still operate unchecked.
“The Christchurch Call is a commitment by governments and tech companies to eliminate terrorist and violent extremist content online.”
When Prime Minister Jacinda Ardern, other presidents and prime ministers, and the bosses of the big tech and social media companies all signed up to that bold goal in Paris, it was a direct response to the way social media was weaponised in the Christchurch atrocity of 15 March 2019.
But last weekend there were disturbing echoes of that atrocity when a teenage white supremacist shot and killed 10 people in Buffalo in the US - and posted a so-called manifesto online citing the Christchurch attack as a personal inspiration.
That document reportedly included entire passages from the paranoid one released by Brenton Tarrant three years ago, regurgitating tenets of the racist, conspiracy-laden ‘great replacement’ theory.
Acting Chief Censor Rupert Ablett-Hampson moved swiftly this week to declare the new document objectionable, making it illegal to possess or share in New Zealand - just as his predecessor David Shanks did in 2019.
But in another ugly echo of 15 March 2019, the killer in Buffalo also live-streamed his attack on Twitch, the popular game-streaming service.
Twitch - owned by Amazon, which is a signatory to the Christchurch Call - says the video was removed within minutes, but clips and images from it are still circulating online.
“We see action being taken far faster and far more comprehensively. It shows how far we've come after the Christchurch attacks here - and New Zealand can take some pride in the leadership role we've taken,” Internet New Zealand chief executive Andrew Cushen told Today FM.
The prime minister's special representative on the Christchurch Call - Paul Ash - also said the agreement had stopped other attacks from going viral online.
“If we move forward six or seven months from the attack in Christchurch, there was an attempt at doing something very similar at a synagogue in Halle in Germany. We saw the crisis response measures implemented for the first time, and it had a significant impact in slowing that attack down and slowing the spread of the material,” he told RNZ’s Morning Report.
It’s good to know that this kind of abuse of the internet by extremists, racists and terrorists can be foiled.
But this week The Press revealed that people online had been sending footage of the Buffalo shooting to survivors of the Christchurch mosque massacre, using anonymous social media accounts that appear to have been set up just for that purpose.
It seems the social media content that radicalises people in the first place is still spreading as widely and as easily as ever.
The US-based Center for Countering Digital Hate reported extreme Islamophobic content to the major social platforms and found most of them ignored almost all the notifications.
“In the wake of what happened in Christchurch two years ago, the platforms all made a solemn vow to the families of those people who had died, promising them that they would do their utmost,” The Center’s chief Imran Ahmed told TVNZ.
“Nine out of 10 times, when faced with extremely graphic hate-filled content, including glorification of the killer in Christchurch, they failed to act,” he said.
“Asking social media companies not to behave in a sort of greedy, selfish, lazy way hasn't worked - because these are greedy, selfish, lazy companies. And that's why we need regulation,” he told TVNZ.
New research from The Disinformation Project - part of Te Pūnaha Matatini at the University of Auckland - found that just a dozen individuals at the recent occupation of Parliament created the online content that was most widely consumed - even though hundreds of people there produced a huge volume of it.
The ease of sharing and the power of the social media algorithms gave them the sort of audiences only the biggest New Zealand mainstream media outlets can deliver.
The mathematical machinery amplifying extremism online
“Nobody really talks about the algorithms of these social media companies,” Imran Shakib from the Islamic Council of New Zealand told TVNZ.
“If you're somebody with a really ultra-right wing perspective, and you're searching ‘destroy Islam’ and these kinds of things, the algorithm is going to keep amplifying more and more things that will basically like support your views,” he said.
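Shakib's point can be illustrated with a toy example. The sketch below is a deliberate simplification - the topics, scores and one-line ranking rule are illustrative assumptions, not any platform's actual system - but it captures the loop he describes: engagement feeds ranking, and ranking invites more of the same engagement.

```python
from collections import defaultdict

# A deliberately simplified sketch of an engagement-driven feedback loop.
# Nothing here is any platform's real code; the topics and the one-line
# 'ranking model' are illustrative assumptions.

def rank_feed(posts, interest):
    # Engagement-based ranking: surface whatever topics this user has
    # engaged with most, highest accumulated interest first.
    return sorted(posts, key=lambda p: interest[p["topic"]], reverse=True)

interest = defaultdict(lambda: 1.0)           # start neutral on every topic
posts = [{"topic": t} for t in ("news", "sport", "extreme")] * 3

interest["extreme"] += 1.0                    # one search/click on extreme content...

for _ in range(3):                            # ...and each session widens the gap:
    top_post = rank_feed(posts, interest)[0]  # the ranker now leads with 'extreme',
    interest[top_post["topic"]] += 1.0        # the user engages, the signal grows

print(rank_feed(posts, interest)[:3])         # feed now opens with three 'extreme' posts
```

Real recommender systems use far richer signals than this, but the self-reinforcing dynamic - engagement begets more of the same - is the same one Shakib describes.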
Prime Minister Jacinda Ardern identified exactly the same problem on TVNZ’s Breakfast.
“The next step for us (is) algorithm transparency. How do we deter people from going down those rabbit holes and (the) role that social media has to play in that?” she asked.
In Washington this week, David Shanks - who has just come to the end of five years as New Zealand's chief censor - told a global summit on social media run by the Center for Countering Digital Hate that it is time to act on that.
“It will be difficult. It will take time and it will be expensive. But one thing these companies have is plenty of money,” he told the summit.
But for social media giants, their engagement-boosting algorithms are key commercial assets that they keep shrouded in secrecy for their own financial interest.
“The internet is primarily owned by a few very huge global conglomerates that are operating it for profit through profiling and packaging users and selling them to advertisers. So it's a very, very different world that we're in now from the one that was being looked forward to in the late 90s. And I think we've got to adjust our approach accordingly,” David Shanks told Mediawatch.
“In the aftermath of that horrendous attack (on 15 March 2019), the platforms were effectively fighting their own machines and AI, which were doing what they were programmed to do, which is distribute material that people were reacting to and engaging with,” Shanks said.
“We don't know the detail of how the algorithms work. And to some extent, the platforms also really don't fully understand how the algorithms work, because they're ‘self-learning.’ They're engaging with billions of interactions online and adapting and evolving continuously. So the challenge is huge,” he said.
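Shanks' point about ‘self-learning’ systems can be made concrete with a toy sketch. The code below is an illustration built on made-up features and signals - not any platform's real model - showing how a ranking model's weights shift a little with every interaction, so its behaviour is never a fixed rule anyone can simply read off.

```python
import math
import random

# Toy online learner (illustrative only): estimates the chance a user
# engages with a post from two made-up features, then nudges its weights
# after every single interaction - 'self-learning' in miniature.
weights = [0.0, 0.0]
LEARNING_RATE = 0.1

def predict(features):
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic probability of engagement

def update(features, engaged):
    # One stochastic gradient step per interaction.
    error = engaged - predict(features)
    for i, x in enumerate(features):
        weights[i] += LEARNING_RATE * error * x

random.seed(1)
for _ in range(10_000):  # stand-in for billions of live interactions
    features = [random.random(), random.random()]
    engaged = 1.0 if features[0] > 0.5 else 0.0  # some hidden behaviour pattern
    update(features, engaged)

print(weights)  # the model's 'explanation' is just these evolving numbers
```

At real-platform scale those two weights become billions of parameters updated continuously, which is why, as Shanks says, even the operators struggle to state exactly what the system will do.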
“But it's not too much different from where we got to with news media - and how we approached powerful technologies such as radio back in the day,” he said.
“It's really important that you have independence doing that job. But there are also some standards and some rules of the game that you need to abide by,” he said.
But he said big tech outfits acknowledge they need intervention.
“(Meta chief executive) Mark Zuckerberg said over two years ago: ‘Regulate us. We need help.’ I suspect you know that that hasn't happened yet,” Shanks said.
He said large streaming platforms such as Netflix and Neon now need to ‘self-assess’ their material according to New Zealand’s classification standards.
“We've introduced a system to allow them to do that. I've been very positively surprised by how engaged these big platforms are once they're clear on what the expectation is and how the system is going to work. So that gives me some cause to be positive into the future,” he said.
Another observer of the impact that the internet has had on our lives in New Zealand over the years is Jordan Carter.
He's stepping down as chief executive of Internet New Zealand - the nonprofit umbrella group which campaigns for an open, effective but safe and secure internet for New Zealand - after almost 10 years in charge.
Three years ago this week, he was in Paris for the signing of the Christchurch Call.
Do the opaque algorithms of the social media giants mean the much-vaunted aims of the Christchurch Call haven't actually achieved a whole lot in the end?
“Sometimes they need boldness to instigate action - and the call that came out of what happened in Christchurch was to be really clear in saying this kind of content should not be online,” Jordan Carter told Mediawatch.
“It hasn't eliminated it, but it's made it less visible on the mainline platforms. I think that's a win,” he said.
“The crisis protocols and work that was done following up in getting those big players to really put some energy and some resources into responding faster ... means that there's less rapid propagation, and things are taken down more quickly,” he told Mediawatch.
But if these big names in tech are still not enforcing their own terms of service against users who publish extreme material - which is then amplified by their own algorithms - can any more progress be made?
“Even if you get the content down quickly, it's still spread to the sub-communities and tiny, obscure forums that you're never going to be able to get completely on top of. That's kind of the nature of the internet - a broad set of distributed platforms,” he said.
“There's a bigger social problem now. It isn't just social media or internet forums where the ‘great replacement’ theory is being talked about. Sometimes it's even in mainline politics - something that the Republican Party in the United States has begun to talk almost openly about,” he said.
Will social media platforms ever surrender their algorithms to any outside agency to at least attempt to reduce the harm they can cause? Or will they ever be compelled to open them up to scrutiny?
“If we don't see that, we're not going to solve the bigger problem,” Jordan Carter told Mediawatch.
“Transparency was one of the agenda items at the Christchurch Call summit in 2019 - and it is the area that’s made the least progress,” he said.
“We've had other industries in the past that have been highly lucrative with various trade secrets and so on. Think of pharmaceutical drugs - we ended up putting patent laws in place that gave people a limited monopoly on those before they became generic and could be copied by others,” he said.
“The chemicals industry wasn't really effectively regulated before the 1970s - and it was killing people. Governments imposed standards because the effects on society were too dangerous. Chemical (companies) are still innovative, make money and the world hasn't fallen in.
“That's the way forward for these social networks,” he said.
“I don't mind if it's voluntary or mandated by legislation. But in the end, transparency allows researchers, practitioners and government agencies to at least begin to understand the impact of these algorithms,” he said.
“They're designed to harvest attention - to generate engagement with the platform. Then, if we can really get a shared understanding of what they're doing to various people and communities, we can start to have a more mature and informed discussion about what to do about it,” he said.
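What the transparency Carter calls for could look like in practice is easy to sketch. The snippet below is purely hypothetical - the field names and the one-line scoring rule are illustrative assumptions, not any platform's real system - but it shows the kind of per-item record that would let outside researchers see which signals drove a ranking decision.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical transparency record (illustrative only): alongside each
# ranked item, a platform would publish the signals behind its score,
# so researchers and regulators can study what the ranker rewards.
@dataclass
class RankingExplanation:
    post_id: str
    score: float
    signals: dict

def explain_ranking(post_id: str, predicted_clicks: float, topic_affinity: float) -> RankingExplanation:
    # The scoring rule is a stand-in for whatever engagement model a
    # platform actually runs; the point is that it gets written down.
    score = predicted_clicks * topic_affinity
    return RankingExplanation(post_id, score, {
        "predicted_clicks": predicted_clicks,
        "topic_affinity": topic_affinity,
    })

audit_log = [explain_ranking("p1", 0.8, 2.0), explain_ranking("p2", 0.3, 1.0)]
print(json.dumps([asdict(e) for e in audit_log], indent=2))
```

Even a log this crude would let independent auditors ask why one post outranked another - the beginnings of the ‘shared understanding’ Carter describes.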