What will AI do to us?

One theory is a horrifying destruction of humanity by an agentic AI that doesn’t share our values.

That’s how the AI doomers see the future: some clever but stupid AI sets out to achieve a simple task with maniacal efficiency and destroys us all in the process.

Short of that gloomy outlook is a range of scenarios, good and bad.

If it doesn’t destroy us, will it help us?

The Australian government is manoeuvring as we speak to try to answer this question.

Will AI be good? How can we help it help us?

The latest agency to get excited about AI is the Productivity Commission.

Everyone knows Australia’s productivity growth is in a deep slump.

Can AI haul us out of it?

Labour productivity is the ability to make more outputs – or more valuable outputs – from the same amount of labour.

If you give a person a spade, they become much more productive at digging and you have increased labour productivity.

If you give them a big yellow excavator, they become more productive still.

In both these cases you have added capital, and that is a major way in which labour productivity is lifted.
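To put rough numbers on it: labour productivity is simply output divided by hours worked. If a worker with a spade digs, say, two metres of trench in an hour, and the excavator lets the same worker dig 20 metres in that hour, labour productivity has risen tenfold – same labour, ten times the output.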

(We also measure ‘total factor productivity’, which looks at the ability to get output from all inputs, but labour productivity matters most to wages and is the key factor in living standards.)

You can certainly see why the Productivity Commission might think AI would help Australian labour productivity.

A worker with AI at their fingertips may well be able to get more done than a person without it.

The Commission has been running a big round of public engagement on AI, seeking views on whether the technology can help our economy make more from less.

The consultation has already surfaced a range of AI uses.

One small example: designers on video productions “generating non-copyright artwork as set decoration” and “generating an actor’s stills shoot image without the actor.”

But there’s a risk: if a company uses an AI and the AI fails, who is to blame?

The problem, explains a submission from an AI research outfit called Good Ancestors, lies with “[l]eading AI labs that use Terms of Service (TOS) to push risks onto businesses and users who have no practical ability to engage with or mitigate these risks.”

They argue some risk should stay with the company that made the AI, giving car fleets as an example.

Companies own the cars, but responsibility for meeting safety standards still rests with the manufacturer.

“Where there is a fault, say an airbag that can eject metal fragments, there is little doubt that the fault needs to be rectified and who has the obligation to do so.

“These confidence-building frameworks do not exist for AI.”

The reality of making rules for AI is that governments move slowly and AI moves fast.

Some things we believed about AI six months ago – 'it can’t even render fingers!' – are no longer true, and what we believe today is unlikely to be true in six months.

Any law risks missing the pointy issues, or being so general that it doesn’t apply to AI specifically.

The Productivity Commission recognises this: “[M]any potential harms of AI are already covered by general laws and regulations,” they say.

“Further, experience shows that poorly designed, technology-specific regulation protects incumbents from competition and creates uncertainty for those who develop and use new technology.”

Is law useful here? Can we tame AI so it’s a help and not a hindrance?

The Productivity Commission will publish its interim report by August and will call for submissions thereafter.