The government has “not moved fast enough” on artificial intelligence (AI) regulation, and the delay could result in serious harm to Australians, one expert has warned in a week in which the issue gained international momentum.

US President Joe Biden this week signed an executive order for new AI safety standards that require companies to notify the US government when they begin training a model with potential risks to security, the economy, or health and safety.

On the same day, the Group of Seven (G7) nations agreed to guidelines for AI regulation. Europe, meanwhile, already has its AI Act, which just needs to work through the political process before coming into law.

So, what’s taking Australia so long?

“I don’t think there’s much excuse for it,” Professor Kimberlee Weatherall, a chief investigator with the ARC Centre of Excellence for Automated Decision-Making and Society (ADMS), told Information Age.

“We know some of the things we need to do, and we just need to pass the legislation. But the tendency of government to date is to prefer more consultation.”

Consultation is undoubtedly important (few would argue otherwise), and Professor Weatherall is an active contributor to the consultation process.

Most recently she has argued, as the lead author of the ADMS’s submission to the Department of Industry and Science’s responsible AI discussion paper, that there are serious gaps in Australia’s existing laws.

Responding to previous criticisms that the government was being too slow on AI regulation, the office of Minister for Industry and Science Ed Husic hid behind a statement that AI was not, in fact, “an unregulated area”.

This is a common refrain among proponents of light-touch regulation but, while technically correct, it fails to account for the myriad gaps around consumer protection, administrative law, discrimination, copyright, data protection, and privacy that Professor Weatherall and the ADMS have already pointed out.

“It’s a problem that we haven’t updated either privacy law or consumer protection adequately over the last decade and now we’re on the back foot,” Professor Weatherall told Information Age.

Major Privacy Act reform is coming, yes, but the process has been excruciatingly slow. Last week, a group of civil society organisations, digital rights activists, and academics urged the government to get a move on and start implementing changes from a review that was handed down in February.

Professor Weatherall said the government should at the very least make sure the public sector is “deploying technology mindfully and properly”.

“The risk is that Australians are harmed by these technologies, that people will be discriminated against, or that technologies will be deployed without proper testing and without proper monitoring,” Professor Weatherall said.

“Robodebt was an example of that. It wasn’t AI, but it showed how widespread deployment of technology that is not tested as it ought to be, and is not legally considered as it ought to be, can harm thousands of people.”

Such a guarantee for the public service, along with updating our existing laws, would be “two basic steps that would make a big difference” in the race to regulate AI, Professor Weatherall said, even if they still leave open how to handle “the private sector question”.

Why are we still talking about this?

As some have noted, this is the problem with the government’s approach to AI regulation: it hides behind existing laws but doesn’t prioritise changing them, all while insisting on additional consultation for issues that have already generated layers upon layers of discussion papers and submissions.

The Department of Industry alone has run five AI-related consultations since 2019, and there is a wealth of existing inquiries, regulatory action, and test litigation from which a willing government could draw.

As the University of Technology Sydney’s Human Technology Institute pointed out in its submission to the government’s latest AI discussion paper, no fewer than five Commonwealth entities have conducted relevant inquiries or published reports since 2019.

In 2021, the Australian Human Rights Commission published a report dedicated to how technology like AI raises unanswered questions around anti-discrimination law.

The Australian Competition and Consumer Commission (ACCC) has published six interim reports from its Digital Platform Services Inquiry that touch on the impact big tech has on society, including the role of data governance and AI.

The Australian Communications and Media Authority (ACMA) has published papers on misinformation, AI in the media, and even one on natural language processing back in 2021 – well before ChatGPT was in vogue.

More recently, the Attorney-General’s Department completed its review of the Privacy Act, making recommendations affecting data protection, privacy, and the role of automated decision-making in our society.

Every step of the way, the public has been consulted, recommendations have been made, and successive governments have dragged their feet.

The government’s approach hasn’t gone unnoticed, either, recently drawing lightly worded criticism from a handful of business lobby groups – including the Australian Computer Society (ACS), which owns Information Age – calling for a “whole-of-government approach” to AI reform.

In its open letter, the group lamented a lack of “national leadership” on this important issue, warning that “Australia will fall behind the world in seizing the economic opportunity of AI and fail to prepare our workers with the skills they need to use it in their jobs”.

While Australia waits for any form of meaningful AI regulation, Industry Minister Ed Husic has headed to the UK for that country’s AI Safety Summit.

Husic’s office told Information Age that the minister wouldn’t want to comment on an article questioning his government’s regulatory approach while he was in the UK demonstrating to our allies how Australia deals with one of the most pressing technological issues of all time.