Robodebt offered a dystopian glimpse of how a poorly implemented automation regime can cause needless emotional stress, financial hardship, and even death, the Royal Commission into the Robodebt Scheme found. The scheme serves as a clear warning that the government must radically change its approach to social services and to the use of advanced automated and artificial intelligence systems.

On Friday, the Royal Commission handed down its final report after considering 1,092 submissions, and hearing directly from 115 witnesses between October 2022 and March 2023.

Its findings describe a “crude and cruel mechanism” that was “neither fair nor legal” and which treated everyday people – many of whom were society’s most vulnerable – as if they were criminals.

“In essence, people were traumatised on the off-chance they might owe money,” the report said. “It was a costly failure of public administration, in both human and economic terms.”

Robodebt was a scheme in which the Department of Human Services used PAYG data from the Australian Taxation Office to find overpayments among welfare recipients, who were obligated to report their income every fortnight.

It used a flawed methodology of dividing a recipient’s total yearly income by 26 to produce an estimated fortnightly income and, where there was a discrepancy between this estimate and the income the recipient had actually reported, would automatically raise a debt against them.
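The flaw in that averaging is easy to demonstrate: anyone whose income was uneven across the year, such as a person who worked for six months and then received welfare while earning nothing, would be falsely flagged. A minimal sketch of the logic, with hypothetical figures (this is illustrative only, not the department’s actual rules or thresholds):

```python
# Illustrative sketch of Robodebt's flawed income-averaging logic.
# All figures and the debt test below are hypothetical, for demonstration only.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """The flawed assumption: income was earned evenly across the whole year."""
    return annual_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(annual_income: float, reported_fortnightly: list[float]) -> list[float]:
    """Flag a 'discrepancy' whenever the yearly average exceeds a fortnight's report."""
    estimate = averaged_fortnightly_income(annual_income)
    return [estimate - reported for reported in reported_fortnightly
            if estimate > reported]

# Example: someone earned $26,000 in six months of full-time work, then
# accurately reported $0 income for 13 fortnights while on welfare.
reported = [0.0] * 13
flags = flag_discrepancies(26_000, reported)

# Averaging spreads the $26,000 across all 26 fortnights ($1,000 each),
# so every zero-income fortnight looks under-reported.
print(len(flags))  # 13 fortnights flagged despite accurate reporting
```

Even though the recipient reported truthfully, the averaging wrongly implies they earned $1,000 in every fortnight, including the ones spent entirely on welfare.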

Raising a debt in this manner contravened the Social Security Act and was known to be illegal, yet 526,000 people received debt notices under the scheme.

The human impact of this scheme was disastrous. People “took out loans, depleted their superannuation, or used credit cards” to repay the debts created against them, the Royal Commission noted.

These illegally raised debts are known to have contributed to the suicides of two young men. The Commission said it was “confident that these were not the only tragedies of the kind”.

Prime Minister Anthony Albanese described the scheme as “a gross betrayal and a human tragedy”.

“It was wrong, it was illegal, it should never have happened, and it should never happen again,” he told a press conference shortly after the report was made public.

Along with the Royal Commission’s report, which contained 57 recommendations, came a sealed chapter that the public has not yet seen and which “recommends the referral of individuals for civil action or criminal prosecution”.

The Royal Commission handed that chapter to the Australian Public Service Commissioner, the National Anti-Corruption Commissioner, the President of the Law Society of the ACT, and the Australian Federal Police for further investigation.

We don’t know whose names are in that sealed section, or whether they include the former ministers Scott Morrison, Stuart Robert, Alan Tudge, and Christian Porter, who were at various times responsible for the scheme.

“Politicians need to lead a change in social attitudes to people receiving welfare payments,” the Commission said.

“The evidence before the Commission was that fraud in the welfare system was minuscule, but that is not the impression one would get from what ministers responsible for social security payments have said over the years. Anti-welfare rhetoric is easy populism, useful for campaign purposes.”

Politicians largely set the standard of political discourse in this country, the Commission went on to say, and they “need to abandon for good (in every sense) the narrative of taxpayer versus welfare recipient”.

Cruel, disastrous automation

At the core of the Robodebt scheme were decisions to take pre-existing protocols and fully automate them, removing humans from the loop in an effort to raise as many debts – or “compliance interventions” – in a week as the department had previously raised in a year.

The Robodebt system was “heralded as a technological triumph” by the government of the day, the Commissioner said, yet this automation – and the decision to remove human intervention – “was a key factor in the harm it did”.

“The scheme serves as an example of what can go wrong when adequate care and skill are not employed in the design of a project; where frameworks for design are missing or not followed; where concerns are suppressed; and where the ramifications of the use of the technology are ignored,” the commission said.

It went on to warn that future government programs incorporating “more sophisticated AI and automation” could have “even more disastrous effects”, amplified by systems whose greater complexity makes it harder to understand how and why they go wrong, unless the government commits to important reform work.

As such, the Commission made two recommendations specific to automated decision-making.

The first is for legislative reform that would replace the Commonwealth’s existing practice of amending legislation piecemeal as the need arises, an approach the Commission described as “patching over problems rather than addressing the fundamental need for a consistent approach”.

What the Commission wants is a clear set of requirements for automated decision-making in government services such that fairness and usability are embedded into future programs.

“There should be a clear path for those affected by decisions to seek review,” the Commission said. This would mitigate the kind of bureaucratic nightmare that navigating Robodebt proved to be.

For the sake of transparency, “departmental websites should contain information advising that automated decision-making is used and explaining in plain language how the process works”.

And “business rules and algorithms should be made available, to enable independent expert scrutiny”.

The Commission also recommended the government either set up a new agency, or empower an existing one, to “monitor and audit decision-making processes with regard to their technical aspects and their impact in respect of fairness, the avoiding of bias, and client usability”.

A similar national body has also been suggested by AI safety advocates.

Automated decision-making was mentioned in the Privacy Act review report which recommended greater transparency around how personal information is used for automated decisions.

Digital rights advocates have argued for the government to take a stricter approach, however, one that is more in line with Article 22 of the EU’s General Data Protection Regulation, which gives people the right “to not be subject to a decision based solely on automated processing”.

The Attorney-General’s Department has not yet publicised its response to the Privacy Act review.