Australia will introduce mandatory safeguards for high-risk AI use but steer clear of outright bans under a new plan that attempts to walk the tightrope between mitigating risks and boosting productivity.

Industry minister Ed Husic on Wednesday unveiled the federal government’s interim response to industry consultation on the responsible use of artificial intelligence.

The government launched the consultation mid-last year and received 510 submissions by August.

Australia’s overarching response to the explosive growth of artificial intelligence will be “risk-based”: it aims to prevent the most serious risks associated with the technology as far as possible, without stifling its innovation potential and substantial economic benefits.

“We’re threading multiple needles at the same time with the way the technology is being developed,” Husic said in a press conference on Wednesday morning.

“We want innovation to occur, but if they’re going to present a safety risk then the government has a role to respond.

“We want to get the benefits of AI while shoring up and fencing off the risks as much as we can, and to design modern laws for modern technology.”

According to McKinsey research, the adoption of AI and automation could increase Australia’s GDP by up to $600 billion per year.

But those gains depend on there being trust in these technologies.

KPMG research found recently that only a third of Australians are willing to trust AI systems, and more than 70 per cent back the government in establishing guardrails.

“We acknowledge that the majority of AI is relatively low-risk, and we are working to introduce these rules for companies that design, develop and deploy AI in high-risk settings,” Husic said.

“Low trust is becoming a handbrake against the uptake of the technology and that’s something we’ve got to confront.”

This is the balancing act the federal government is attempting to perform in making any legislative changes around AI: to capitalise on its wealth of potential benefits while avoiding as much of the associated risk as possible.

“We’ve taken the concerns around AI seriously and sought to listen widely, respond thoughtfully and cooperate internationally on a complex issue.

“We also want to set up things so government can respond quicker to developments in the technology as they occur.”

Plan welcomed by the ACS

The Australian Computer Society (ACS) has welcomed the AI plan, saying it is an “important step” towards capitalising on the opportunities the technology offers.

“Given the profound changes AI will make to the workforce in coming years, ACS welcomes the federal government’s response and looks forward to working with the proposed Temporary Expert Advisory Group to ensure Australia has regulation that’s fit for purpose over the coming decades,” said ACS interim CEO Josh Griggs.

“Consulting with experts and industry leaders is going to be critical in ensuring that any regulation reaps the benefits of AI while mitigating the real risks presented from misuse of the emerging technology.

“We look forward to working with the federal government, industry, educators and all key stakeholders to ensure Australia maximises the benefits from AI and associated technologies over the coming decade.”

The ACS 2023 Digital Pulse report found that 75 per cent of workers will see their roles changed by AI, with the impact felt across the Australian economy.

Guardrails for high-risk use

The response will see the federal government distinguishing between the use of AI in high-risk settings, such as health and law enforcement, and its more general application, such as through generative AI tools like ChatGPT.

For high-risk use, the government is set to introduce mandatory safeguards that may include independent testing of products before and after their release, ongoing audits, and labelling to show when AI has been used.

The government is also considering requiring organisations to appoint a dedicated role focused on AI safety under this mandatory code.

These uses may include self-driving vehicle software, tools to predict the likelihood of someone reoffending and software to sift through job applications to identify candidates.

The current legal landscape in Australia does not adequately address the risks associated with the use of AI in this manner, the government said in its response.

“Existing laws do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur,” the government response said.

“The current regulatory framework does not sufficiently address known risks presented by AI systems, which enables actions and decisions to be taken at a speed and scale that hasn’t previously been possible.”

The response acknowledged that other jurisdictions, most notably the European Union, are moving to ban the highest-risk uses of AI outright, but did not indicate that Australia will follow suit.

Late last year, the European Union agreed on a landmark Artificial Intelligence Act, which will ban the use of AI for high-risk activities such as social credit systems and biometric surveillance.

The Australian government is yet to decide whether it will introduce a similar standalone piece of legislation or amend existing laws to pursue its AI policy agenda.

Voluntary rules for low-risk use

For lower-risk AI use, the government will introduce a voluntary scheme, including an AI content label that uses “watermarks” to identify when AI has been used to create content.

An expert advisory group will be stood up to guide the development of the mandatory code, and the federal government is preparing to consult on the details of legislation.

The mandatory code will be in place by the end of the year, Husic said, with the voluntary rules planned to take effect before then.

The interim response flagged that “frontier” AI tools like ChatGPT may require future targeted attention but did not outline any steps the government may take.

“It was also highlighted that AI services are being developed and deployed at a speed and scale that could outpace the capacity of legislative frameworks, many of which have been designed to be technology-neutral,” it said.