Renowned artificial intelligence expert Toby Walsh criticised the Australian government and Big Tech firms for what he argued were inadequate approaches to AI regulation, in a speech at the National Press Club in Canberra on Wednesday.
Walsh is chief scientist of the University of New South Wales’s AI Institute and one of 12 independent experts appointed by the federal government to a temporary AI expert group which briefly provided advice to politicians in 2024.
He accused the Australian government of going against international trends by “not investing in the upsides” of AI, “not regulating the harms of AI adequately”, and allowing political discussions to become "dominated by Big Tech”.
Walsh was critical of the Labor government dumping plans for dedicated AI laws to instead rely on existing legislation to regulate the technology, and said there were "fresh harms that AI is bringing into our lives that will need fresh laws".
"For example, we had to pass new laws to criminalise distributing deepfake nudes," he said.
"… I have just one question for our politicians that I haven't seen answered well: What makes Australia so special that we’ll see the benefits of AI without making the sort of new AI laws that other nations are introducing?"
Walsh said the government had ignored “almost all” of his advice, including his belief that Australia needed dedicated AI legislation.
Walsh also criticised the government's decision to abandon plans for a permanent AI advisory board – quietly scrapped in August 2025 after the government spent $188,000 deciding on 12 nominees from 275 applicants, according to documents tabled in parliament last week.
Industry Minister Tim Ayres and Assistant Technology Minister Andrew Charlton announced in December 2025 that the government would instead establish an AI Safety Institute at a cost of $30 million, to sit inside the Department of Industry, Science, and Resources.
While Walsh welcomed the announcement of an AI Safety Institute at the time, he questioned in Wednesday's speech why the government would not want “independent advice from AI experts – advice given at no cost, and in private”.
“I can assure you that my colleagues from the temporary AI expert group and I will continue to offer advice fearlessly, whether they want it or not,” he said.
“But now we’ll offer this advice not in private but in public, as I am doing today – I am sure they will find this much more uncomfortable than I will.”
AI’s ‘boom and doom’
Constructive uses of AI would “transform our lives” for the better much like electricity had, said Walsh, who argued the technology had created both “boom and doom”.
He said he was “angry” about scams and illegality enabled by AI, the “large-scale theft” of intellectual property used to train generative AI models, the loss of some jobs to AI, and the potential effects of AI chatbots on mental health.
Walsh said his anger had turned to “outrage” over the case of Adam Raine, an American high school student whose parents sued ChatGPT maker OpenAI after its chatbot allegedly supplied the 16-year-old with information on suicide methods and helped him plan his death.
“I hope they win,” he said.
“… His parents' lawsuit alleges that OpenAI rushed ChatGPT-4o to market without adequate testing.
“But OpenAI's own policy documents, uncovered in the discovery phase of this trial, reveal something far more damning.
“To encourage engagement, the company made conscious decisions to remove longstanding safeguards from ChatGPT in the weeks and months leading up to his death.”

Toby Walsh says OpenAI did not include enough safety protocols in ChatGPT-4o, which was first released in May 2024. Image: Shutterstock
Walsh also referred to other similar lawsuits now facing OpenAI, and cited data released by the company in October indicating that around 1.2 million people each week sent ChatGPT messages containing “explicit indicators of potential suicidal planning or intent”.
“And some of those people are here in Australia,” Walsh said.
“I know because some of them or their loved ones are contacting me.
"They tell me how the chatbot confirms their wild theories, that the chatbot tells them, to quote one email, that they’ve 'cracked the code', that they’re 'the only one that could.'"
Walsh argued AI chatbots were “designed to be sycophantic” and agree with their users, and suggested “the careless people in Silicon Valley would make less money” if they were instead designed to minimise engagement or challenge what a user said.
He also specifically called out Google’s AI Overviews for driving traffic away from news websites, social media giant Meta for profiting from AI-enabled scams and “facilitating a huge amount of crime”, and Elon Musk’s X for “weaponising the abuse of women” with AI nudify software.
Are we 'repeating the mistakes of social media'?
While Walsh argued Australia had seen “real success” and inspired legislative change in other nations by banning under-16s from holding social media accounts in 2025, he said he feared the world was “repeating the mistakes of social media” by not regulating AI more tightly.
"We’re about to supercharge the sort of harms we're seeing with social media with an even more powerful and even more persuasive technology,” he said.
“What I fear most is that I’ll be back here in three or four years’ time saying, ‘We tried to warn you, but another generation of young Australians has been sacrificed for the profits of Big Tech.’”
Walsh added it would be “a mistake” for Australia not to maintain its own sovereign AI infrastructure and talent.
The nation needed to better support locally-owned AI systems, Assistant Minister Charlton told the Australian Business Economists Conference on Tuesday.
He argued hands-off regulation, which he described as “lazy Liberal laissez-faire", would see the nation “rent the platforms, import the intelligence, and export the profits”.
“We cannot sit back and lease someone else’s brainpower,” he said.
“We must build it here, back Australian founders, equip Australian workers, and ensure that AI is something we design, deploy and lead, not something we merely import and consume.”
Walsh said that would only occur if "we have the capability, the compute, the people, and the data here in Australia – not in some data centre in Silicon Valley".