Australia will follow in the footsteps of the European Union, the United States, the United Kingdom, Japan, Canada and other major economies by establishing its own AI Safety Institute, the federal government announced on Tuesday.
The new body, to be established in 2026 under the Department of Industry, Science and Resources, will monitor and address potential risks of AI as the technology becomes more prevalent in the economy and in the workings of government.
The institute would also ensure AI companies comply with Australian laws, help to improve regulation across government, and “advise where updates to legislation might be needed”, Labor said.
Minister for Industry, Innovation, and Science, Tim Ayres, said while AI could “revitalise industry, boost productivity, and lift the living standards of all Australians”, the commonwealth also needed to keep Australians safe “from any malign uses”.
“The institute will be the government’s hub of AI safety expertise,” he said.
“It will operate with transparency, responsiveness, and technical rigour to make sure Australians are confident to use this game-changing technology safely.”
Assistant Minister for Science, Technology, and the Digital Economy, Dr Andrew Charlton, said the government was “working closely with industry, business, unions, and civil society to increase the uptake of AI in a way that is safe and responsible”.
The new institute would also collaborate with Australia’s National AI Centre and AI Safety Institutes in other nations, the government said.
‘An excellent move’: Advocates welcome AI announcement
The announcement of an Australian AI Safety Institute (AISI) follows years of advocacy from numerous industry experts, bodies, and some politicians.
Industry groups such as the Business Council of Australia (BCA) and the Australian Academy of Technological Sciences and Engineering (ATSE) had previously called on the government to establish an AISI.
Greg Sadler, a spokesperson for advocacy group Australians for AI Safety, welcomed the government's decision and argued the nation's prosperity and security would be determined by how it navigated advancements in AI technologies.
“A world-leading AI Safety Institute will give Australia the technical expertise to understand advanced AI, contribute to preventing serious risks, and put us on the path to global leadership,” he said.
Toby Walsh, chief scientist of the University of New South Wales AI Institute and a member of the ACS AI and Ethics committee — who previously said Australia “desperately” needed an AISI — reacted to Tuesday’s news with a simple, “Bravo!”

Minister for Industry, Innovation, and Science, Tim Ayres (centre), says an AISI will help Australians use AI safely. Image: Tim Ayres / LinkedIn
Other AI researchers also welcomed the move while awaiting further details on how Australia’s institute will operate.
Associate Professor Michael Noetel from the University of Queensland said while the institute’s creation was “an excellent move”, experts would be “watching the details closely”.
“One of the most important things to get right is leadership,” he said.
“Other parts of government are focused on current AI risks and on driving AI adoption.
“The AISI needs to be led by someone focused on the safety of frontier AI development and with credibility in Silicon Valley.
“They’ll need to attract and retain talent and negotiate with AI companies for access and transparency.”
Dr Alexander Saeri, an AI governance and risk researcher at the University of Queensland and MIT, said the institute would need funding not only to attract talent, but also to “access the computing resources necessary to probe frontier AI systems”.
“We also need a clear mission statement that centres the organisation's focus on frontier AI risks,” he said.
Opposition flags ‘mixed messages’ from government
Shadow Minister for Industry and Innovation, Alex Hawke, said the Coalition wanted to see Australians protected from AI risks and “supported to access AI’s benefits”, but argued the announcement of an Australian AISI “does not make up for Labor’s dithering”.
“AI needs serious, coordinated national leadership,” he said.
“Instead we are seeing mixed messages from the government, including from dumped industry minister Ed Husic,” he added, referring to the former minister’s continued public backing of a dedicated AI Act.
The previous Labor government proposed mandatory guardrails for high-risk uses of AI under Husic, but the re-elected Albanese government has since been urged to pause work on those guardrails and has reportedly abandoned plans for dedicated AI legislation such as an AI Act.
Hawke added, “AI must not become a trojan horse for the union movement to kick off more industrial interference,” and cited Minister Ayres, a former union official, as having called for unions to take part in discussions on AI’s use in the economy.
The Australian Council of Trade Unions (ACTU) said it welcomed the government's AISI announcement, having campaigned for workers to share in the benefits of AI after some had “their content stolen by AI, their jobs disrupted, and their rights breached”.
ACTU assistant secretary Joseph Mitchell said establishing an AISI was “an important first step” in making sure Australian laws and standards were adhered to by AI models, including those developed overseas.
“Too many livelihoods have been stolen in the rapid development of these models,” he said.
“The first step in sharing the benefits is protecting against the potential harms.”
The government added that protecting Australians from the potential harms of AI would also be “a key pillar” of its upcoming National AI Plan, which it is expected to release before the end of 2025.
Labor's AI Plan for the Australian Public Service, released earlier this month, revealed plans to appoint chief AI officers in every federal agency in 2026.