US President Joe Biden has signalled his intention to push for the repeal of a law that protects internet companies from being held responsible for speech posted on their platforms.

“I’m calling on Congress to get rid of special immunity for social media companies and impose much stronger transparency requirements on all of them,” Biden said at the United We Stand Summit, an event seeking to combat hate-fuelled violence held last week.

He spoke of a history of violence against groups of people in the US, saying there was “a through-line of hate from massacres of Indigenous people, to the original sin of slavery, the terror of the Klan, to anti-immigration violence against the Irish, Italians, Chinese, Mexicans”.

“That through-line of hate never fully goes away. It only hides.”

Biden said this history of hate has been “given too much oxygen” in recent years in politics, the media, and online, “all for power and profit”.

Since the Christchurch massacre was livestreamed on Facebook, there have been both greater expectations for social media companies to do better and copycat shooters who have posted hate-filled manifestos online and shared videos of their horrific acts of violence.

Earlier this year, an 18-year-old murdered 10 people in the US while livestreaming the attack on popular video game streaming platform Twitch.

Only a few weeks ago, internet infrastructure company Cloudflare stopped providing protection services for anti-trans harassment site Kiwifarms, whose community has been linked to at least three suicides.

Section 230

Biden’s rhetoric last week has been interpreted as a sign he intends to repeal, or at least reform, the notorious Section 230 of the US Communications Decency Act – something he said he would do prior to the 2020 election.

Commonly known as the ‘26 words that created the internet’, Section 230 broadly shields platforms from responsibility for the third-party content they host.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” Section 230 reads.

Without the freedom of Section 230, it is hard to imagine social media as we know it – sites and apps on which millions of people can simultaneously post potentially libellous and defamatory material without it being pre-moderated.

Enacted in 1996, Section 230 has in recent years become a bit of a bogeyman for US policymakers looking to gain more control over online discourse.

Former President Donald Trump threatened to remove Section 230 protections for social media platforms that he said were “stifling free and open debate by censoring certain viewpoints” – by labelling Trump’s tweets as misinformation, for example.

Change the internet

Tama Leaver, Professor of Internet Studies at Curtin University, said a repeal of Section 230 could spell disaster for smaller social media companies that can’t afford a radical shift in the way content needs to be moderated.

“The big platforms like Facebook and Google, they’ve got capacity and money to make those changes if they have to,” he told Information Age.

“The real question is what kind of competition there will be if new entrants have to moderate content in advance from day one.”

Leaver thinks Biden could be signalling changes to Section 230 as a way of getting social media platforms to ramp up their moderation efforts.

“The global need to combat hate speech and misinformation is very real and immediate, so if this is the hammer you need to bang on the table to get tech companies to pay attention, that’s not such a bad thing.”

The effects of Section 230 are felt wherever these companies operate freely, including in Australia where courts have explored whether internet services like Google or Facebook can be deemed ‘publishers’.

In August, Google successfully appealed to the High Court to overturn a Victorian decision that it had acted as a publisher by allowing links to a defamatory news article to appear in Search results.

The High Court has also ruled that people and organisations, like news outlets, can be liable for defamatory comments left on their Facebook posts – even if Facebook is not liable.