Tech companies and world leaders have vowed to stop the spread of terrorist and violent extremist content online.
Representatives from 18 countries and eight global technology companies have signed the 'Christchurch Call', a set of commitments aimed at preventing the use of online services to disseminate terrorist and violent extremist content.
Prompted by the terrorist attacks in Christchurch, which were livestreamed on Facebook and later re-uploaded millions of times, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron hosted the delegates at a meeting in Paris today.
Leaders from the 18 countries, including Australia, Canada, the UK, Japan, India and Indonesia, have committed to the Christchurch Call; the US is a notable exception.
Google, YouTube, Amazon, Facebook, Twitter, and Microsoft are amongst the tech companies that have signed the non-binding agreement.
Ardern said the group had collectively taken a stance against the proliferation of violent content spreading on the internet.
“We’ve taken practical steps to try and stop what we experienced in Christchurch from happening again,” Ardern said.
“The March 15 attack was shocking in its use of social media as a tool in the act of terror and with the Christchurch Call we have taken a unique approach to solving this problem.”
The Christchurch Call outlines legislative and social actions, such as:
· Providing more education and media literacy to counter terrorism and violent extremism.
· Enforcing laws that prohibit the production or dissemination of extremist content.
· Encouraging ethical standards and supporting frameworks for the media to avoid amplifying terrorist and extremist content.
· Considering regulatory action that prevents online services being used for the dissemination of extremist content.
Silicon Valley’s largest tech companies committed to:
· Taking specific measures to prevent the uploading and dissemination of extremist content on social media and other platforms.
· Providing more transparency with the implementation and enforcement of community standards.
· Reviewing algorithms and other processes that may drive users toward terrorist content.
· Immediately mitigating the dissemination of extremist content on livestreaming platforms.
Facebook Live
Facebook was heavily criticised for its handling of the Christchurch terror attack.
On Wednesday, the social media giant published a blog post explaining how it is adapting the policies and features of Facebook Live to mitigate future misuse.
A one-strike policy will see serious policy violators immediately banned from the Live feature. Until now, only repeat offenders copped a ban.
Facebook will also begin working with three major US universities to research new techniques for detecting manipulated media, and distinguishing between unwitting posters and the original publishers of fake pictures, video and audio.
“This work will be critical for our broader efforts against manipulated media, including deepfakes,” Facebook VP Guy Rosen said in the post.
“We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack.”
Consistent with human rights law
The Christchurch Call contains several instances of the phrase ‘in a manner consistent with human rights law’, which the University of Sydney’s expert in online media regulation, Dr Fiona Martin, said should allow for the activities of political dissidents on these platforms.
“In Myanmar, for example, Facebook took down accounts of Rohingya activists trying to document the genocide there,” she said.
“This leaves space for people to document violence against minority groups.
“One of the difficulties, if you’re using AI to filter out violent content, is that it can’t distinguish what the purpose of the posting is just by looking at the images.
“So there is the danger of people who are rightly documenting human rights offences being censored.”
Martin said she would like to see American companies like Facebook seek more global perspectives on their decision making.
“I’d like to be optimistic about Facebook working closely with US universities,” she said.
“But they are still working within US free speech paradigms, and whether the measures Facebook adopts will reflect what the EU or other countries might come up with is unclear.”