A new open letter signed by prominent figures in the tech world claims artificial intelligence (AI) poses the same risk to human life as nuclear weapons or pandemics, but digital ethicists and researchers say it’s all hype that ignores the technology’s real dangers and promotes a strange ideology.

Academics and executives from the likes of Microsoft, Google, and OpenAI signed a simple 22-word statement from the Centre for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

It echoes an open letter from March in which signatories called for a six-month pause on AI development to mitigate “profound risks to society and humanity”.

But people like Professor Tim Miller, co-director of the University of Melbourne’s Centre for AI and Digital Ethics, are concerned the conversation around AI vastly overstates its power and ignores the real harms the technology can cause.

“I would say treating AI on the same level as climate change, nuclear war, and pandemics is ridiculous,” Professor Miller told Information Age.

“We've seen pandemics happen, climate change is happening right now, and there are countries that have nuclear weapons – these are all things that can happen tomorrow.”

On the other hand, the idea of a superintelligent AI agent developing its own goals, controlling physical resources, and deciding to destroy humanity is, for Professor Miller, “speculative”, resting on the assumption that intelligence will emerge from advanced forms of predictive text.

Dr Simon Goldstein is a senior research fellow at the Australian Catholic University and a signatory to the Centre for AI Safety statement.

He disagrees that framing AI as a threat on par with nuclear war, climate change, or pandemics diminishes the seriousness of those other risks.

“Sadly, the biggest difference between AI and these other threats is that AI is completely unregulated,” he told Information Age.

“Can you imagine letting private companies develop nuclear weapons or super viruses for profit without any government oversight? That needs to change quickly.”

Dr Goldstein agrees with AI pioneer and Turing Award winner Geoffrey Hinton that technologists are creating “a new form of life” – one that might “replace us as the dominant form of life on this planet”.

As such, AI regulations should be focused, in Dr Goldstein’s view, on monitoring AI for “dangerous capabilities”.

“Relevant here is the ability to hack computers, the ability to intentionally deceive human users, the ability to persuade human users to change their beliefs, and the ability to make complex plans,” he said.

AI doom and gloom

Professor Miller is not alone in his scepticism of the types of views held by Dr Goldstein.

Two former Google AI ethics researchers, Dr Timnit Gebru and Dr Margaret Mitchell, have helped formalise opposition to the science fiction-like narrative of AI harm pushed by the likes of the Centre for AI Safety.

Dr Gebru and Dr Mitchell – alongside computational linguists Emily M Bender and Angelina McMillan-Major – co-authored a seminal paper about the real dangers of language models in 2021.

Far from speculating on machines that seek to enslave humanity, the quartet warned of the potential for language models – like the ones that power ChatGPT, Microsoft Bing, and Google Bard – to further entrench bias about groups of people, or mechanically spread misinformation to unsuspecting users.

The last six months have offered a crash course in the risks of natural language processing systems since OpenAI unleashed a free version of its powerful chatbot onto the world.

OpenAI’s CEO Sam Altman has been vocal about the so-called ‘existential risks’ of AI and was a signatory – alongside Microsoft and Google executives who have rushed to strap chatbots to the internet – to the latest letter that claims AI is as dangerous as nuclear weapons.

Speaking to Information Age, Professor Miller said the types of real-world harms caused by AI systems aren’t likely to be felt by the Silicon Valley elite.

“They disproportionately affect poor people, people of colour, people from lower socio-economic backgrounds, not tech bros,” he said.

Australians are all too aware of the effects AI and automated decision-making can have on marginalised people, thanks to the Robodebt scandal.

Robodebt was an unlawful program in which a simple algorithm caused large-scale harm; the damage ultimately resulted from removing humans from decision-making, not from a rogue computer setting out to hurt welfare recipients.
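To see why such a simple algorithm proved so damaging, consider a minimal, hypothetical sketch of the income-averaging logic at the heart of Robodebt: annual income reported to the tax office is spread evenly across 26 fortnights and compared with what a welfare recipient actually declared each fortnight. The function name, taper rate, and dollar figures below are illustrative only, not the scheme’s real calculation.

    # Hypothetical sketch of Robodebt-style income averaging, for illustration only.
    # The real scheme's calculation, thresholds, and taper rates were more involved.
    FORTNIGHTS_PER_YEAR = 26

    def estimate_alleged_debt(annual_ato_income: float,
                              reported_fortnightly_income: list[float],
                              benefit_paid: list[float],
                              taper_rate: float = 0.5) -> float:
        """Raise an alleged 'overpayment' by assuming income was earned
        evenly across the year rather than when it was actually earned."""
        averaged = annual_ato_income / FORTNIGHTS_PER_YEAR
        debt = 0.0
        for reported, paid in zip(reported_fortnightly_income, benefit_paid):
            # The flawed step: treat the yearly average as that fortnight's
            # income, even if the person earned nothing while on benefits.
            assumed_excess = max(0.0, averaged - reported)
            debt += min(paid, assumed_excess * taper_rate)
        return debt

    # A casual worker who earned $26,000 in one half of the year, and nothing
    # while receiving payments in the other half, is wrongly assessed as
    # owing money despite reporting their income honestly.
    print(estimate_alleged_debt(26_000, [0.0] * 13, [550.0] * 13))  # 6500.0

No human ever checks whether the averaged figure matches what the person really earned in a given fortnight, which is precisely the removal of human judgment from decision-making described above.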

Content creators are also at risk of losing their livelihoods to AI companies that have scraped their life's work – writing, photography, visual art, even open source software – from the internet, folded it into training data, and re-sold it as an AI tool.

Professor Miller thinks AI companies – like those whose executives signed the latest statement – have a “commercial motivation to overhype this technology”.

“We hear them say ‘it’s so smart it’s going to outsmart us all’ but this is their technology they’re talking about,” he said.

“The more they overhype it, the more people will buy into the fact that this type of technology is worth investing in and is worth using.”