Concerned that Australia’s Online Safety Act is inadequate to manage the “serious risks” posed by issues like online hate and deepfakes, the government has brought forward by a full year a planned review that could impose a legal duty of care on digital giants.

The review – which will be headed by former ACCC chair Delia Rickard – will see the release of an Issues Paper for public consultation by the end of June, with interested parties encouraged to share their views to shape a final report that will be handed to government by 31 October.

Although the original legislation provided for three years from its 24 January 2022 commencement before a review would be conducted, that legislation predates the explosion of generative AI (gen AI) technology into the public consciousness in late 2022.

Since then, government policymakers have scrambled to contain a fast-expanding arsenal of AI-powered tools that is enabling the malicious creation and dissemination of sexualised deepfakes, personalised phishing campaigns, online hate, and other content affecting everybody from Nine News journalists to Taylor Swift, to the businesses being scammed out of millions.

Last March, the government’s response to the Social Media and Online Safety inquiry identified gaps in the legislation. In November, it telegraphed the review’s acceleration as part of a “holistic assessment” of the legislated online safety framework policed by the eSafety Commissioner – a framework that, among other things, includes a Basic Online Safety Expectations (BOSE) Determination now being revisited with an “express focus on minimising the creation and amplification of unlawful or harmful material” through the use of gen AI.

“So much of modern life happens online,” Minister for Communications Michelle Rowland said in announcing the latest review, “which is why it is critical that our online safety laws are robust and can respond to new and emerging harms.”

“Our laws can never be a set-and-forget, particularly as issues like online hate and deepfakes pose serious risks to Australian users.”

Is eSafety still fit for purpose?

The newly released terms of reference (ToR) for the review – which come on the heels of a government push to fast-track laws against malicious ‘doxxing’ – note that new technologies have delivered “significant benefits across society and the economy [but] provide avenues for malicious activities that can harm individuals and erode social cohesion.”

Ten different matters are on the review’s agenda, including an evaluation of the effectiveness of legislation regulating such harms as cyberbullying, non-consensual sharing of intimate images, cyber abuse material targeted at an Australian adult, and material that depicts “abhorrent violent conduct”, as well as of the Online Content Scheme.

Rickard’s review will also evaluate the operation and effectiveness of the BOSE, as well as exploring whether additional legislation is needed to cover online behaviour not already explicitly covered by existing schemes – including online hate; volumetric “pile-on” attacks; technology-facilitated abuse and gender-based violence; and online abuse of public figures.

Also on the table is an evaluation of the potential safety harms raised by emerging technologies including gen AI, immersive technologies, recommender systems, end-to-end encryption, and changes to technology models such as decentralised platforms.

The review will also consider whether changes to the regulatory arrangements, tools, and powers available to the eSafety Commissioner are warranted – including the potential introduction of a UK-style ‘duty of care’ owed by digital giants to their users, which is described as mirroring existing work health and safety legislation, and would mark the culmination of years of discussion and steady progress towards such a duty.

Rickard will also consider whether digital companies should be forced to contribute towards the cost of eSafety’s regulatory activities – a proposal that has proved controversial in Europe, where Meta and TikTok recently sued regulators to get out of contributing to the estimated $74 million (€45 million) kitty required to fund enforcement this year alone.

Consultation on the Act will take place amidst a growing effort to boost awareness of eSafety, and the online safety laws it enforces – which recently saw the launch of a five-month multilingual advertising campaign aimed at educating Australians about current laws and the organisation’s powers to enforce them.

Submissions to the BOSE Determination review close this Friday, 16 February.