The government is prepared to regulate artificial intelligence if businesses fail to develop and use the new technology in a way that is responsible and meets community standards, Industry and Science Minister Ed Husic said this week.
Speaking at the National Press Club on Wednesday, Husic answered a question about how the government is planning to mitigate the risks of AI, just days after the Guardian reported that AI-generated voices can fool verification software used by Centrelink and the Australian Taxation Office.
“If we don’t get it right, you’ll see the type of response that’s come out of what you’ve seen with Optus and Medibank,” the Industry and Science Minister said, referencing legislation that quickly followed two major data breaches last year.
“If businesses don’t get their frameworks right, the community expectation – absolutely understandably – is that governments will step in.
“Better to think ahead and get it right that way.”
Husic said he has already tapped the National Science and Technology Council for advice about how to manage generative AI as it continues to be developed with surprising and impressive results.
“The National Science and Technology Council is made up of some of the nation’s most pre-eminent scientists, so we are expecting that report shortly to help inform policy work,” Husic said, noting that parliament is “certainly thinking through” further inquiries into AI.
Husic went on to highlight the recent announcement of a Responsible AI Network run by the national science agency, the CSIRO, a group aimed at guiding AI adoption among Australian businesses in the hope that a form of industry self-regulation can expand the technology’s use in a way that keeps people and society safe.
When Labor came to power it inherited the previous government’s self-regulatory approach to AI in the form of a voluntary set of AI Ethics Principles.
Years ago, businesses like NAB and Telstra trialled incorporating the principles, which include ensuring that AI systems respect human rights, uphold privacy, and have human accountability.
But there have long been critics of self-regulatory models for AI, including the Australian Human Rights Commission, which said in a 2021 report that AI ethical frameworks tend to have “limited positive impact in protecting human rights and addressing broader harms”.
“Too often, there is a wide gap between the principles espoused in AI ethical frameworks and how AI-powered products and services are actually developed and used,” the Commission said. “This can contribute to irresponsible and damaging use of AI.”
It went on to say that ethical frameworks “should not be a substitute for the law”, but should instead fill “gaps where detailed legal regulation is inappropriate” and provide “ethical guidance where the law does not, and should not, articulate a clear rule to follow”.
The Department of Industry’s own Strategic Policy Division levelled similar criticisms at the government’s approach during public consultation on the AI Ethics Principles in 2019, saying in its submission that existing legal frameworks would be “inadequate to address the challenges presented by AI”.
Uncertainty around AI principles could, the department warned, “increase the costs of AI systems to either developers, operators or subjects of AI systems” by leaving clarification up to the courts in a process of “lengthy, complex and uncertain litigation”.