Government agencies’ high-level descriptions of their artificial intelligence (AI) use cases in new mandatory registers have failed to quell concerns from developers, auditors, and disability and privacy advocates that the systems’ impacts cannot be assessed.

The “transparency statements” of the 55 non-corporate Commonwealth entities that met the 28 February deadline were only required to provide “an overview of their intended AI use”, Lucy Poole, the Digital Transformation Agency (DTA) officer overseeing the policy, told Information Age.

Poole said agencies’ responses would help “build public trust…and confidence”, but the broad summaries raised more questions than they answered about whether, and how, AI affects high-stakes systems used in disability support decisions, tax compliance, visa programs and other government functions.

Founder and chair of Digital Rights Watch Lizzie O'Shea noted the lack of detail about models’ training data and said that the transparency standards should “address the data inputs” and not just “outputs”.

“Such models are fundamentally built upon mass invasion of privacy and any policy intervention must reckon with this.”

Technology consultant Justin Warren, a critic of the discontinued Robodebt scheme and Centrelink’s more recent AI trials, added that agencies’ responses exemplified their failure to realise “trust is earned…too few agencies want to do the work to earn that trust.”

AI in visa programs

The Department of Home Affairs (DHA), which uses AI for “predicting risks in visa program[s]”, did not clarify in its AI “transparency statement” whether AI is incorporated into the “prioritisation and allocation tools” used to more efficiently process its backlog of character-based migration decisions, or into “the use of computerised decision-making” that bans certain “visa holder[s]” from “undertaking critical technology-related study” posing a national security risk.

However, Freedom of Information (FOI) documents have previously confirmed DHA uses AI for "identifying indicators of fraud in documentation supplied to support visa applications".

Parliament’s powerful audit committee recommended on Tuesday that more “detailed questions on the use and understanding of AI systems” be incorporated into “[the] annual APS census”, noting “AI systems may not fully grasp…decisions directly impact[ing] individuals’ lives – for example, in areas like welfare benefits, criminal justice, or immigration.”

Concern about how AI could impact disability support

The agency administering the National Disability Insurance Scheme (NDIS) was one of several that were “encouraged”, but not mandated, to “apply this policy”.

The National Disability Insurance Agency (NDIA) is yet to follow the lead of other exempt agencies, like the Australian Federal Police, in voluntarily publishing an AI transparency statement.

The NDIA posted a tender last month for technology to “accurately predict support needs”, specifying that “data derived from the assessment tool(s)” enables “scores for each domain and/or support need area”, and clarified to software providers that they could propose “artificial intelligence” and/or an “AI capability” in their responses.

Cat Walker, a disability advocate in her third year of an FOI case over secret systems used to calculate NDIS plans for autistic people, said she was disappointed the NDIA was “only ‘encouraged’ to apply the policy.”

Ministers responsible for the NDIS have long said that emerging technology only informs NDIS decisions without making them, but Walker told Information Age that “discretion isn’t enough if we can’t interrogate business rules for ethical issues, bias or other faulty assumptions.”

Australian Taxation Office’s 43 AI models

The 43 AI systems the Australian Taxation Office (ATO) operates or is developing have been under scrutiny since a February audit proved more critical of the ATO’s AI safeguards and governance than of the other 19 agencies assessed.

The ATO’s transparency statement provided only broad descriptions of AI’s role in “tax time nudging”, “fraud detection” and “communications content”.

Nicole Kelly, founder of accounting and finance software platform TaxTank, told Information Age that the DTA should have required “more comprehensive details from agencies”.

The ATO has “extensive data matching programs” with both agencies and the private sector “but there is no transparency” over whether or “how AI is being applied across this data or the security risks involved,” Kelly said.

“Without clear disclosures on the ATO’s AI datasets, taxpayers cannot know if the data is fair or accurate.”