Revenue NSW’s use of artificial intelligence to automatically garnish the bank accounts of people with unpaid fines was unlawful, the NSW Ombudsman has warned, after an extensive review in which it advised government agencies to consider seeking Parliamentary approval before automating “important administrative functions”.

The new report was occasioned by numerous complaints from NSW citizens who had found their bank accounts were being depleted – without their knowledge or consent – by an AI-powered revenue recovery system that “was having a significant impact on individuals, many of whom were already in situations of financial vulnerability”.

Between 2016 and 2019, automatic calculation of citizens’ liabilities to state government agencies – on whose behalf Revenue NSW collects outstanding fines and other debts – resulted in garnishee orders that left thousands of people horrified to find their bank accounts depleted or emptied.

“These people were not complaining to us about the use of automation,” said NSW Ombudsman Paul Miller in releasing the new report. “They didn’t even know about it.”

Revenue NSW had responded to the concerns in 2019 by working with the Ombudsman’s office to make the system operate “more fairly” by adding human oversight, the report noted, but “we still had questions as to whether Revenue NSW’s system of garnishee automation was legally consistent with its statutory functions.”

Independent legal analysis confirmed that the original automated process used to garnish citizens’ bank accounts was “unlawful”, the Ombudsman said, and even the changes Revenue NSW implemented “do not appear to have fully addressed the legal concerns.”

The experience reflects the potential repercussions of government agencies’ ongoing lack of transparency in the use of automated systems – which, in NSW alone, are already known to be used in fines enforcement, policing, child protection, and driver licence suspensions.

Government bodies often use manual reviews to avoid creating undue financial stress for those owing fines, with the coronavirus pandemic one recent example in which NSW government bodies acquiesced to calls for a lighter touch.

Yet a review of published government policies about the use of AI suggests that agencies are giving “inadequate attention” to legal issues around their use of AI, the Ombudsman said, warning that AI use cases in government “must also be assessed from an administrative law perspective.”

“We believe that this assessment must be central to the use of this technology…. The requirement for proper authorisation means that statutory functions are not and cannot be directly given or delegated to a machine.”

Resolving AI’s trust deficit

Despite years of hype, we are still far from a consensus about the government’s use of AI.

Australians had the most polarised opinions about AI out of five countries surveyed in a recent KPMG-University of Queensland study, which found just 32 per cent were willing to rely on information provided by an AI system – and that 41 per cent were unwilling to do so.

Fully 70 per cent of respondents said they accept or tolerate AI, but few approve or embrace the technology and 36 per cent said they have low or no confidence in governments to develop the technology in the best interests of the public.

NSW has been actively working to find new uses for AI across its processes, launching a formal website soliciting citizens’ opinions – but revelations that its use of AI was unlawful bode poorly for a government that has called building public trust a foundation stone of its AI adoption.

In the wake of the Revenue NSW fiasco – dubbed RoboDebt 2.0 by some – the state’s formal AI Strategy recognised the need for AI “to be developed responsibly and with a clear focus on outcomes so that the community has trust that the technology is being used appropriately, and that any unintended consequences are avoided or remedied quickly and effectively.”

Trust issues are likely to persist as adoption accelerates, with Gartner recently finding that 16 per cent of government agencies had already deployed AI based on machine learning-supported data mining – and that 69 per cent expect to implement the technology within the next three years.

Even government employees remain sceptical, with just 53 per cent agreeing that AI technologies provide valuable insights to help them in their jobs.

Just 44 per cent of respondents said AI improves their decision-making, with 11 per cent arguing that AI makes more errors than humans do.

Ultimately, Miller said, the review has highlighted the importance of caution and transparency as a growing number of agencies integrate AI into their processes.

“As an integrity agency, our concern is that agencies act in ways that are lawful, that decisions are made reasonably and transparently, and that individuals are treated fairly,” he said.

“Those requirements don’t go away when machines are being used.”