Artificial intelligence has the potential to improve the wellbeing of humanity, from taking over tedious or dangerous jobs better handled by computers or robots to driving significant improvements in human lives.

That’s according to Andrew Leigh, professor and current Shadow Assistant Minister for Treasury and Charities.

Despite this potential, however, Leigh considers superintelligence an existential threat to humanity.

“The more I've read, I’ve been persuaded we shouldn't just benignly assume that superintelligence will have the same goals as humanity,” he tells Information Age.

“We need to think carefully about how to encode human values.”

Leigh’s new book, What’s the Worst That Could Happen?, canvasses the existential risks facing humanity that require long-term thinking and planning to avoid the worst of their effects.

His list ranges from environmental to political and includes climate change, pandemics, populism, nuclear threats and superintelligence, or what’s been dubbed “the last invention”.

While superintelligence could turbocharge productivity and living standards, without appropriate rules and values it has no guidance on how its actions affect humanity.

“My concern, and what's the worst that could happen, is that long-term challenge about an artificial intelligence that does outstrip us; presumably at that point, overcoming the sorts of biases, but potentially creating much more catastrophic risks for the species,” he says.


Andrew Leigh. Photo: Supplied

Why AI superintelligence must include human values

Leigh has a PhD in public policy from Harvard and comes to politics from academia where he was a professor of economics at the Australian National University.

His interest in economics is more than theoretical.

His podcast, The Good Life, for instance, features discussions with different people about living meaningful lives.

“Simple notions such as ‘let’s just maximise human happiness’ or ‘let’s just minimise human suffering’ could easily go wrong,” he says.

What makes artificial intelligence different from every other risky technology, Leigh writes, is its runaway potential.

“Once a superintelligence can improve itself, it is unstoppable. So, we need to build the guardrails before the highway.”

These guardrails should take the form of essential criteria that guide development toward the right purposes.

Altruism, for example, is one of the human values that Leigh believes should be part of the essential makeup of any AI system, especially when it comes to high-powered superintelligence.

“The idea that artificial intelligence should be modest and curious, and that it should work steadily to figure out humanity's underlying objectives is the right one,” he says.

Can the world find the cooperation needed to avoid the worst of AI?

Leigh fears a kind of “global arms race” to develop the first superintelligence, one in which safety is discarded in the pursuit of leading innovation.

He sees the world moving toward an ideological divide on AI, with countries such as France and Canada backing an international panel on AI (akin to the IPCC) to focus on ethical concerns, while the US has withheld support for what it considers an overly cautious path that hampers innovation.

“Turbocharging the race for artificial intelligence – while dismissing the dangers – increases the odds of the kind of ‘extremely bad (read: human extinction)’ scenario that artificial intelligence experts say has a 5 percent probability of occurring. Reducing the risks of a superintelligence will require more global cooperation, not less,” he writes.

Leigh likes the idea of adopting programming principles that mandate machines be observant, humble and altruistic.

“This makes the most sense to me, rather than trying to encode every aspect of our values.”
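Those three principles echo the “value uncertainty” approach found in the AI safety literature, notably Stuart Russell’s work: rather than being handed a fixed objective, the machine treats human values as something to be inferred. The Python sketch below is purely illustrative and not from Leigh’s book; the candidate objectives, utilities and confidence threshold are invented for the example. It shows an agent that updates a belief over possible human objectives by watching human choices (observant), defers to the human while that belief is uncertain (humble), and, once confident, optimises the human’s inferred objective rather than any goal of its own (altruistic).

```python
# Toy sketch of an "observant, humble, altruistic" agent.
# All objectives, utilities and thresholds are illustrative assumptions.

# Hypothetical candidate objectives the human might hold,
# expressed as action -> utility tables.
CANDIDATE_OBJECTIVES = {
    "maximise_happiness": {"automate_job": 0.9, "preserve_job": 0.2},
    "minimise_suffering": {"automate_job": 0.3, "preserve_job": 0.8},
}

ACTIONS = ["automate_job", "preserve_job"]
CONFIDENCE_THRESHOLD = 0.8  # below this, the agent asks instead of acting


class HumbleAgent:
    def __init__(self):
        # Start maximally uncertain: uniform belief over objectives (humble).
        n = len(CANDIDATE_OBJECTIVES)
        self.belief = {name: 1.0 / n for name in CANDIDATE_OBJECTIVES}

    def observe_human_choice(self, action):
        # Bayesian-style update: objectives that rate the observed human
        # action highly become more plausible (observant).
        for name, utilities in CANDIDATE_OBJECTIVES.items():
            self.belief[name] *= utilities[action]
        total = sum(self.belief.values())
        self.belief = {k: v / total for k, v in self.belief.items()}

    def choose(self):
        best = max(self.belief, key=self.belief.get)
        if self.belief[best] < CONFIDENCE_THRESHOLD:
            return "ask_human"  # humble: defer rather than guess
        # Altruistic: maximise the *human's* inferred utility, not its own.
        utilities = CANDIDATE_OBJECTIVES[best]
        return max(ACTIONS, key=utilities.get)


if __name__ == "__main__":
    agent = HumbleAgent()
    print("Initial decision:", agent.choose())      # ask_human (too uncertain)
    for _ in range(3):
        agent.observe_human_choice("preserve_job")  # watch the human act
    print("Belief:", agent.belief)
    print("Decision after observing:", agent.choose())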

The goal is to bring together experts in both programming and philosophy, so the right people are sitting around the decision-making and planning table.

He notes there’s a strong degree of consensus among AI experts that humanity will eventually design a computer more powerful than the human brain, and that the machine will then accelerate past us.

“There's also a reasonable degree of consensus that we shouldn't assume that the values of that machine will be the same as ours,” says Leigh.

Humanity’s hope with values-led innovation

An encouraging sign is that organisations are adopting values statements for AI innovation.

On this, Leigh sees common guiding themes, particularly around the need to embed appropriate human values, avoid creating weapons systems that would be able to kill autonomously, and encourage collaboration over competition.

“Some guidelines also embody Bostrom’s ‘common good principle’, which states that superintelligence should only be developed for the benefit of all humanity and in the service of widely shared ethical ideals,” he writes.

“The common good principle creates an expectation that the first country or business to create a superintelligence will immediately share it with the rest of humanity,” he notes.

However, humanity’s “last invention” should not be protected by a twenty-year patent, according to Leigh.

And what of the role of government?

The would-be minister believes governments have a responsibility to balance their support for innovation with their regulatory duties.

In particular, human values-led AI innovation must be supported through good institutional design that is harmonised across government, not delegated piecemeal to different parts of the bureaucracy.

“Smart design can stop today’s artificial intelligence systems from making dumb mistakes and prevent tomorrow’s systems from taking over the world,” he writes.

Ultimately, Leigh is a tech optimist, and that optimism extends to AI.

“If we have the values right, then it’s an extraordinary invention that brings a massive improvement in living standards, which should lead to a huge reduction in global poverty, disease and suffering,” he says.