Australian government agencies are falling short of transparency expectations when it comes to explaining how they use automated decision-making technologies, according to a new report from the nation’s information watchdog.
Released on Wednesday by the Office of the Australian Information Commissioner (OAIC), the report followed an October review that assessed how transparent federal agencies were about their use of automated decision-making (ADM) technology.
ADM refers to the use of a computer program to automate a decision-making process, and the report showed several government agencies were unclear about their use of the technology – including in cases connected to AI.
The report assessed whether 23 de-identified Australian government agencies complied with their publication requirements under the Information Publication Scheme (IPS), which encourages agencies to proactively release information about their conduct to the public.
Though each agency “is authorised to use ADM under various legislative statutes”, just 13 agencies mentioned ADM in their IPS information.
Four of those agencies – the Australian Taxation Office, Services Australia, the Department of Health, Disability and Ageing, and the Department of Veterans' Affairs – explicitly disclosed ADM use in decisions that affect the public.
The other nine instead referenced or implied the use of ADM, often in connection with AI.
“However, they did not specifically say whether they used ADM in any of their decision-making or recommendation processes,” the OAIC wrote.
The OAIC found these nine agencies often “implied” ADM may be in use – by, for example, mentioning it in a corporate plan or referencing it indirectly in their AI strategy – but the watchdog could not “ascertain if this was indeed the case”.
The four agencies that did disclose their use of ADM were meanwhile “not clear about how they used it”, while ADM use could not be confirmed at some 74 per cent of the agencies reviewed.
The OAIC report arrived after the government’s National AI Plan promised legal “consistency” for ADM as the use of AI expands, alongside chief AI officers who will drive adoption in each agency.
Are agencies hiding their hand?
In addition to IPS statements, the report also examined agencies’ websites and AI transparency statements.
“Our threshold was whether a member of the public, who wanted to know if an agency was using ADM, could reasonably do so by performing relatively simple searches on the agency’s website,” read the report.
Using this methodology, the commissioner found it was “likely” ADM was in use at two agencies despite it not being appropriately disclosed.
In an anonymous case study, one agency mentioned in a data strategy report that it was “embracing automation and artificial intelligence”, which allowed it to make decisions based on data in a timelier manner – though the agency did not explicitly state on its website whether or how it used ADM.
“It does not elaborate on how these decisions are made, and whether any decisions made by the agency are based solely on automated processes,” the OAIC wrote.

Nine government agencies that 'implied' they used ADM had not publicly confirmed as much, the OAIC said. Image: Shutterstock
We don’t need another Robodebt
The report comes after years of fallout from the federal government’s Robodebt scandal, where an automated debt recovery program wrongfully accused welfare recipients of owing the government money.
The government has since committed $587 million in compensation to victims, and the OAIC specifically noted “public examples of failures of oversight of ADM” such as those outlined by the Robodebt Royal Commission had “highlighted the need for transparency about the use of ADM by government”.
“The benefits of utilising ADM technology in government will only be realised if risks are appropriately mitigated and trust is built with the Australian community,” the OAIC wrote.
Indeed, the commissioner said Robodebt “relied heavily” on ADM for its ‘income averaging’, while Information Age last year found staff at Centrelink and Medicare agency Services Australia had tested AI’s ability to predict fraudulent welfare claims.
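The flaw in income averaging is straightforward to demonstrate. The sketch below is a minimal, hypothetical illustration – not the scheme’s actual code, rules, or thresholds – of how spreading a yearly income figure evenly across fortnights can manufacture an apparent debt for someone who worked part of the year and correctly reported no income while receiving welfare.

```python
# Hypothetical sketch of the 'income averaging' flaw: annual income is
# spread evenly across 26 fortnights, even if it was actually earned in
# only part of the year. All figures below are illustrative.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """Spread a yearly income figure evenly across every fortnight."""
    return annual_income / FORTNIGHTS_PER_YEAR

# A person earned $26,000 across six months of work, then received
# welfare while unemployed and correctly reported $0 income in those
# fortnights.
annual_income = 26_000.00
reported_income_on_welfare = 0.00

averaged = averaged_fortnightly_income(annual_income)  # $1,000 per fortnight

# Averaging makes it look like income was under-reported in every
# fortnight, flagging an apparent 'debt' that may not exist.
if averaged > reported_income_on_welfare:
    shortfall = averaged - reported_income_on_welfare
    print(f"Apparent under-reporting: ${shortfall:,.2f} per fortnight")
```

The Robodebt Royal Commission found this kind of mismatch between averaged and actual fortnightly earnings was at the heart of the scheme’s wrongful debt notices.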
Services Australia has since outlined a three-year plan that aims to ensure its use of AI and automation is “human-centric, safe, responsible, transparent, fair, ethical, and legal”.
Commissioner calls for transparency
The OAIC ultimately recommended all agencies authorised to use ADM disclose this in their IPS information, and clarify whether they “utilise ADM to provide information and services to the public”.
Other recommendations included clear statements of the types of ADM agencies used, covering technologies ranging from “simple calculators to machine learning”.
The report further called for the publication of lists of decisions ADM is used for, alongside easy-to-understand examples.
As a result of the report, the OAIC will update Freedom of Information (FOI) guidelines so ADM is expressly included as an example of ‘operational information’ – which agencies are specifically required to publish.
“Information about decision-making and the exercise of agencies’ functions is important information for the Australian community,” said information commissioner Elizabeth Tydd.
“It improves integrity, accountability, and trust.”