Unchecked artificial intelligence technologies could increase the risk of widespread unemployment, AI-enabled terrorism, and humanity losing control of its own creation, according to a report by almost 100 experts in the field.
Published last Wednesday, the first-ever International AI Safety Report is the work of 96 experts from 30 countries who examined current scientific evidence around risks and mitigation techniques for AI, with the goal of informing global policy decisions.
While the experts held different views on how likely or imminent the worst possible impacts of AI might be, they agreed existing evidence pointed to many potential risks.
“Some think that such risks are decades away, while others think that general-purpose AI could lead to severe public safety dangers within the next few years,” the report said.
What remained clear was that AI would continue advancing: the technology has already moved from struggling to create coherent text to generating life-like photos, videos, and deepfakes, writing computer code, and conducting complex research.
Australia’s Professor Bronwyn Fox — deputy vice chancellor of research and enterprise at the University of New South Wales, who was recently appointed an Officer of the Order of Australia (AO) — contributed to the report.
She told Information Age the report showed it remained critical “to maintain public trust in AI to capture the benefits that AI will bring to us in society, including uplifting productivity”.
AI ‘could advance slowly, rapidly, or extremely rapidly’
AI models could be scaled up further for a few more years, but research breakthroughs and novel approaches would likely also be needed for innovation to continue, the report found.
In the time since the report was written, ChatGPT creator OpenAI has released its more advanced o3 reasoning model, as well as a Deep Research model which it said uses reasoning to complete complex “multi-step research tasks”.
“The o3 results are evidence that the pace of advances in AI capabilities may remain high or even accelerate,” said the report, which reiterated that energy consumption by AI data centres was growing as models grew larger.
“The final report also emphasises the intensified energy consumption linked to the exponential increase in adoption of AI, paving the way for new solutions to meet this challenge, including utilising green energy sources and reducing water consumption,” Professor Fox said.
However, the recent release of China’s popular DeepSeek AI chatbot, which came as a surprise to the Western technology industry, has challenged US companies: the chatbot was purportedly developed at a lower cost and requires less computing power.
This unpredictability in AI development had sometimes made it difficult for researchers and policymakers to standardise risk management approaches and coordinate internationally, the report found.
Professor Fox said the progress seen in OpenAI’s o3 reasoning model and the lower cost and power requirements of DeepSeek underscored “the high and even accelerating rate of advancement in AI capabilities”.
An AI expert from China also contributed to the report, in a rare piece of collaboration with Western counterparts.
Australia's Professor Bronwyn Fox was among the 96 AI experts who contributed to the report. Photo: Supplied
AI could have ‘significant effect’ on job market
Evidence of more risks had emerged as AI had become more capable, experts said.
"Recent advances in general-purpose AI capabilities — particularly in tests of scientific reasoning and programming — have generated new evidence for potential risks such as AI-enabled hacking and biological attacks," they said.
Sophisticated AI agents which use computer systems to complete tasks — and which are expected to become more widely accessible in 2025 — could also have “a significant effect on the labour market” by automating many tasks, they added.
This could include large-scale job losses, experts said, which might be partly offset by the creation of some new occupations.
Experts warned of other harms which could be caused by the use of AI, including AI-enhanced scams, bias against certain groups of people or opinions, privacy violations, and the continued rise of non-consensual image-based abuse and AI-generated child sexual abuse material (CSAM).
‘AI does not happen to us’
While experts said “no combination of techniques” could fully resolve or prevent the harms caused by the use of AI, they agreed the technology could benefit society if the correct decisions were made.
"AI does not happen to us: choices made by people determine its future,” they said.
Canadian AI pioneer Yoshua Bengio, who chaired the report, said human actions and agency were crucial, as “both very positive and very negative outcomes are possible”.
“AI remains a fast-moving field,” he wrote.
“To keep up with this pace, policymakers and governments need to have access to the current scientific understanding on what risks advanced AI might pose.”
Australian government figures and Professor Bronwyn Fox are expected to take part in the Paris AI Action Summit later this month, where countries are anticipated to sign a declaration on AI development.