The agency responsible for administering Centrelink has revealed few details about the artificial intelligence models it trialled late last year to predict fraudulent welfare claims and to prioritise which debts to recover.

A Services Australia spokesperson told Information Age that the AI assessing potential debts had not been “deployed into ongoing business processes” and that the fraud detection AI was still “being thoroughly tested before being deployed more broadly”, but welfare rights advocates have called for the release of audits of the systems’ accuracy and risk of bias.

“Services Australia should be required to provide evidence that the system does what it is supposed to do,” said Justin Warren, an IT professional who won a seven-year Freedom of Information (FOI) case against the agency for the release of documents related to the Robodebt scheme.

The use of AI was “trialled to enhance work allocation to staff, specifically debt-related activities that are likely to result in a ‘finalise no debt’ outcome”, according to documents released to an anonymous FOI applicant in August.

Another “predictive model” was developed that “complements existing fraud strategies by assisting fraud analysts in identifying high risk claims.”

The human-in-the-loop debate

The FOI documents said that the models were “non-decision making”, and the spokesperson added that they do not “make decisions on whether a claim is fraudulent or not” or “determine any debts or complete any debt actions”, which suggests a system more like Robodebt’s predecessor, the Online Compliance Intervention.

Since 2001, welfare recipients’ reported income had been compared with their averaged annual tax records to automatically flag discrepancies, but those discrepancies were manually assessed until Robodebt launched in 2016 and began sending debt letters without human pre-checks.

However, digital rights advocate Asher Wolf said the use of the term “non-decision making” could be highly misleading.

“Calling a predictive model ‘non-decision making’ is an oxymoron; it is a contradiction in terms.

“Predictions always involve decision-making: which theory to investigate, which target to pursue.”

AI debt prioritisation

The Services Australia spokesperson said that the “debt prioritisation pilot…involved a process to make workload prioritisation more efficient.”

“By prioritising these potential debts, staff can review them and action them right away.”

“The pilot is designed to assist staff; customers do not interact with the system.”

Both tools are defined in the FOI document as “machine learning”, suggesting their rules were set by feeding historical data to self-learning algorithms.

This contrasts with the pre-coded rules of the tools used in the Robodebt scheme, which raised more than half a million inaccurate Centrelink debts.

AI fraud detection

Services Australia’s Fraud and Corruption Control Plan 2025–2026, released last month, states that its “detection processes…include: advanced data analytics including machine learning to detect potential fraud and corruption”.

“The model flags claims with potential risks to be reviewed and analysed by one of our trained fraud analysts.

“A fraud analyst will determine if a claim requires further investigation for potential fraud, otherwise the claim will be processed under usual business processes.”

The spokesperson declined to answer questions about which factors the model associates with fraud, whose data it was trained on, and the types of data used in training.

“To ensure the integrity of our operations, we don’t provide specific information on how our detection capabilities work.”

Wolf said that more transparency was needed to ensure that Services Australia’s systems would not “automate…past injustices.”

“To protect against human rights abuses, algorithmic decision-making in high-risk sectors – such as human services – should be transparent, open-sourced, open to public scrutiny and have easily identifiable and accessible avenues for remedy if errors occur.

“Are Services Australia's machine learning models choosing to target geographic regions that have historically been subject to high levels of welfare surveillance of Indigenous communities?

“Or areas subject to the disastrous ParentsNext program or punitive Cashless Welfare Cards? Or will the algorithms directly target individuals who have previously received erroneous Robodebts?”

Services Australia built a machine learning tool in 2014, since discontinued, that “mined ten years of customer data in the Centrelink Mainframe…to identify demographic characteristics associated with welfare claimants that had been overpaid.”

However, the spokesperson said the current tool is used to detect fraud whereas the previous one was applied to the broader categories of non-compliance and debt recovery.

UPDATE 04/03/25: Services Australia has further defended its trial of AI models and provided more detail on their use after Information Age's story was tabled in Senate Estimates.