It might be billed as the biggest technological advancement since the dawn of the internet, but Australians are developing a genuine mistrust and fear of artificial intelligence (AI) tools.
Just 30 per cent of Australians believe the benefits of AI outweigh the risks.
Australia also trails other countries in realising the benefits of AI: 55 per cent of Australians report experiencing benefits, compared with 73 per cent globally.
This public fear and mistrust is being blamed for slower adoption rates: 37 per cent of respondents to a global survey revealed they have personally experienced or observed negative outcomes, ranging from inaccuracy, misinformation and manipulation to deskilling and the loss of privacy or intellectual property.
Released this week, the global study, Trust, attitudes and use of artificial intelligence: a global study 2025, is the most comprehensive examination yet of the public’s trust in, use of and attitudes towards AI.
Researchers led by Professor Nicole Gillespie, chair of Trust at Melbourne Business School, asked 48,340 people across 47 countries what they thought of AI. The survey reveals that, as well as being wary, Australia ranks lowest globally on acceptance, excitement and optimism about adopting AI, alongside New Zealand and the Netherlands.
The research also found that half of employees are using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools such as ChatGPT.
“Psychological safety around the greater use of AI in work is critical,” Professor Gillespie said.
“People need to feel comfortable to openly share and experiment with how they are using AI in their work and learn from others, for greater transparency and accountability.”
Accessible training, workplace support and public education are required to build trust, Professor Gillespie said.
“AI literacy consistently emerges in our research as a cross-cutting enabler: it’s associated with greater use, trust, acceptance and critical engagement with AI output and more benefits from AI use, including better performance in the workplace.”
Risky business
The combination of rapid adoption, low AI literacy and weak governance is creating a complex risk environment, said John Munnelly, chief digital officer at KPMG Australia.
“Many organisations are rapidly deploying AI without proper consideration being given to the structures needed to ensure transparency, accountability and ethical oversight – all of which are essential ingredients for trust,” Munnelly said.
Confused and scared
AI strategist Brooke Wright agrees that AI is having a PR crisis in Australia, saying it gives off MLM (multi-level marketing) vibes rather than a sense of meaningful innovation.
“The mistrust comes down to two things: fear and confusion.
“Most people don’t really understand how AI works, which makes it feel like a black box.
“Add privacy concerns and some very loud doomsday headlines, and no wonder 36 per cent of Aussies are sceptical,” Wright says.
“If we want to build trust in AI, we need to stop the hype and start the education.
“Give people tools they can actually use.
“Show them how AI supports their work – not replaces it. That’s how we move from fear to empowerment.”
Traction troubles
The mistrust could be making it challenging for the myriad AI-related startups to gain a foothold in their respective industry sectors.
Brisbane-based AI founder Denver Naidoo has been working to launch his AI workforce platform startup, Zeligate, since 2023. His experience tells him that Australians don’t hate AI – they just don’t trust what they can’t see, understand or control.
“The mistrust isn’t just about the tech itself, it’s about how it’s being communicated, governed and applied.
“Right now, most people see AI as something happening to them, not with them.
“That naturally triggers resistance.
“People are wary of powerful technologies being controlled by unaccountable companies.
“Without clear, local governance, Australians are right to ask who is in control,” he said.
To build trust, Naidoo said, companies need to put humans at the centre and make trust the default, with consent, transparency and control baked into all AI design.
“If trust isn’t automatic, it won’t scale,” Naidoo said.
Meanwhile, on the eve of the federal election, AI safety advocates are pushing for the government to introduce mandatory safeguards.
Australians for AI Safety said the research is further proof that the major parties need to take swift action to make AI safe for everyone.
The independent body argues that just as robust safety regulation allowed the aviation sector to flourish, AI innovation will only thrive once independent testing and clear statutory duties build public trust.