Australia is at risk of falling behind on artificial intelligence (AI) regulation as the technology's uses become more apparent every day and major economic blocs start carving out dedicated laws to govern it.
While we have an entirely voluntary set of AI Ethics Principles – and a Responsible AI Network to help organisations adopt those principles – the government has not committed to legislating dedicated AI regulation in this term of government.
“AI is not an unregulated area,” a spokesperson for Industry and Science Minister Ed Husic told Information Age this week.
“Our copyright laws prescribe how data can be collected and used to train large language models, and privacy laws shape the kind of information that can be incorporated.
“Consumer protection laws also apply to the use of AI to mislead or deceive consumers.”
The government has sought, and received, advice from the National Science and Technology Council about the implications of generative AI but has not yet made the report public, nor offered an updated policy position.
Meanwhile, each week seems to bring new examples of the economic disruption, social upheaval, and democratic interference AI tools could cause at scale – whether it be workers fighting back against AI replacement, new ways to develop malicious software, scammers tricking Australians with AI-generated voices, a fundamental shift in education, or the potential for deepfakes to spread misinformation far and wide.
National AI ethics auditors
Dr Dana McKay is a senior lecturer in RMIT’s School of Computing Technologies.
She is concerned that a light-touch or self-regulatory approach from the government would fail to adequately protect Australians, warning that in her view, “self-regulatory is synonymous with self-interest”.
Instead, she wants to see a national AI ethics body audit the use of these tools within the country to make sure they serve the national interest.
“The AI Ethics Principles are great but without external audits, how do we know that any of these things are happening?” Dr McKay told Information Age.
It’s not the speculative ‘existential risk’ of AI that McKay is worried about so much as the potential effects of unaudited systems trained on data scraped from the internet.
“When the Ubers and Googles of the world brought self-driving cars to Australia for testing, the first time any of them saw a kangaroo they freaked out because they weren’t designed for an Australian context,” she said.
“Using an AI that makes decisions about medicine, about whether you owe money to Centrelink, about which defence technology you build, is probably not going to take into account the specific Australian context.
“I don’t think AI is going to turn us all into paperclips,” Dr McKay continued, referencing the notion that a superhuman intelligence machine given a mundane task – like making paperclips – could have catastrophic consequences.
“But we are building these systems so fast with so little explainability built in, so little option to ask ‘why does it say that’, to take our personal data out of the training sets – there are all these problems associated with it that haven’t been thought through.”
In March, Husic suggested business would have one chance to get AI adoption right, referencing the public outcry and swift legislative responses to the Medibank and Optus breaches.
“If businesses don’t get their frameworks right, the community expectation – absolutely understandably – is that governments will step in,” Husic said. “And then there’s a whole debate about if you are being too tough or too soft.
“Better to think ahead and get it right that way.”
EU, China leading the way
Overseas, the regulatory wheels have already started turning.
The EU has proposed a set of new laws that would strictly control the sale of AI systems, taking a risk-based approach that varies according to each system's use case.
Any AI system that could be a threat to people’s safety, livelihoods, or rights would be outright banned – that includes governmental social scoring, like China’s notorious Social Credit System.
Companies that want to sell AI tools for ‘high risk’ areas like critical infrastructure, education, employee management, law enforcement, and migration – among others – would have to follow “strict obligations” before their systems can be sold within the large economic zone.
This has already prompted OpenAI CEO Sam Altman – who recently testified before the US Congress about the need for AI regulation – to threaten, without any apparent irony, that his company would leave Europe if the strict laws are passed.
In China, lawmakers have circulated a draft law about managing generative AI that takes a strict approach to machine-generated content.
Under the proposed regulations, providers of generative AI in China would have to register with the government, apply a form of watermark to images, videos, and audio, and would “bear responsibility as the producer of the content generated by the product”.
The content itself would also be severely restricted: it must not subvert state power, incite separatism, contain false information, or upset the economic or social order.