Facebook is under growing pressure over how it deals with violent and hateful content, following the live streaming of the Christchurch terrorist shootings on Friday that left 50 people dead.

A 17-minute live stream of the massacre was widely shared on the social media platform, prompting Facebook to act.

“In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload,” Facebook New Zealand’s Mia Garlick said in a statement.

“Out of respect for the people affected by this tragedy and the concerns of local authorities, we’re also removing all edited versions of the video that do not show graphic content.”

Facebook also says it quickly removed the shooter’s Facebook and Instagram accounts.

But for many this wasn’t good enough.

“Why is it that when it comes to making a dollar the big social media giants know everything about the users of social media, when it comes to detecting and preventing and discouraging hate speech, then they become Pontius Pilate and wash their hands of the whole affair,” said Australian opposition leader Bill Shorten.

“It's not good enough. We've got to work together with them.”

New Zealand’s Prime Minister, Jacinda Ardern, said Facebook has “further questions to answer”.

“We did as much as we could to remove, or seek to have removed, some of the footage that was being circulated in the aftermath of this terrorist attack,” she said.

“But ultimately it has been up to those platforms to facilitate their removal.”

Reuters reported that Ardern has requested a meeting with Facebook to discuss live streaming.

Australia’s Prime Minister, Scott Morrison, called on social media users to take responsibility by not sharing such content.

“There's obviously a personal responsibility for those who would seek to share these images,” he said.

“Personally, I have not even seen them, I don't wish to see them. I don't wish to give the extremist terrorist, the right-wing extremist terrorist, the satisfaction that I would have looked at that, because I have no interest in what he has to say.”

In addition to the 28-year-old shooter, an 18-year-old man has been charged with “inciting extreme violence” over the alleged re-posting of the live stream on Friday. He was refused bail in New Zealand and remanded in custody until 8 April.

What can be done?

The Christchurch attack is not the first time such an act has been posted and shared on social media.

In the United States, Facebook is currently being sued by the family of a man whose killing was recorded and shared on the platform in April 2017.

Facebook first rolled out its live streaming service – now named Facebook Live – in April 2016.

Within its first year, at least 50 acts of violence, including murders and suicides, were broadcast via the service, according to the Wall Street Journal.

Sites like Facebook and YouTube use AI-powered content recognition technology to compare recently uploaded material with known illicit material.

While this is presumably the technology that blocked the 1.2 million copies of the video at upload, the fact that at least another 300,000 copies made it onto the platform suggests it is by no means perfect.
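To make the matching step concrete, here is a minimal sketch of hash-based blocking in Python. The BLOCKED_HASHES set, the function names and the use of plain SHA-256 are illustrative assumptions; Facebook’s actual systems are understood to use perceptual hashing and machine-learning classifiers rather than exact digests.

```python
import hashlib

# Illustrative blocklist of SHA-256 digests of files previously
# identified as illicit. The entry below is a placeholder value,
# not the digest of any real video.
BLOCKED_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block_upload(path: str) -> bool:
    """Block an upload whose digest matches a known illicit file."""
    return file_digest(path) in BLOCKED_HASHES
```

An exact digest only matches byte-identical copies: re-encoding, trimming or watermarking a video changes every byte of the file, so edited versions slip through. That is why platforms layer fuzzier perceptual matching and classifiers on top, and those fuzzier checks are what miss copies like the 300,000 described above.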

Facebook also employs human moderators to keep an eye on potentially offensive content as it is posted.