Services Australia, the government agency responsible for Centrelink and Medicare, says it is conscious of the risks of artificial intelligence and automation and still faces trust and legislative issues as it outlines a three-year plan for how it will use the technologies to improve its work.
The agency said it was committed to ensuring its use of AI and automation would be “human-centric, safe, responsible, transparent, fair, ethical, and legal” as it released its first Automation and Artificial Intelligence Strategy on Friday.
The strategy was a “crucial” requirement, the agency said, given it now handled around 9 million digital interactions with customers each week.
Services Australia officials defended the agency’s recent trials of AI systems during Senate Estimates in March, after Information Age revealed staffers had tested the technology’s ability to predict fraudulent welfare claims and which debts to prioritise recovering.
The testing followed a pause in the use of automation for social security and welfare claims in the wake of the Robodebt scandal, which raised more than half a million inaccurate Centrelink debts between 2016 and 2019.
Services Australia said there was now “a significant opportunity for automation and AI” to enhance its services, given technologies had improved and the agency now dealt with more than 1.1 billion online interactions each year.
It said safeguards guided its use of AI and automation, including experimenting in controlled offline environments, pausing systems which did not meet assurance and governance requirements, and “having a human ‘in the loop’ to check AI outputs, where appropriate”.
The agency said it had also now assigned its general manager of automation and architecture — currently Vanessa Roarty, according to its website — as the official held accountable for Services Australia’s implementation of both AI and relevant policy set by the government’s Digital Transformation Agency (DTA).
Tackling the ‘trust deficit’
Services Australia’s chief customer officer, Jonathon Thorpe, said AI and automation could help “modernise and enhance service delivery for both our staff and customers”, but the agency admitted it still faced some barriers in achieving that, including “a trust deficit” with customers, staff, and other stakeholders.
“Limited trust is driven by the real or perceived risks associated with using automation and AI including legal and ethical concerns, misinformation, transparency and explainability, safety and security, industry concentration, environmental impact and job loss,” the agency said.
“… Critically, failing to deploy oversight tools, such as monitoring for accuracy and reliability, codes of conduct, independent ethical review boards, audits and adherence to standards will also impede trust and confidence.”
Services Australia said it would try to make sure its decisions were documented, evidence-based, and explainable, with accountable officers identified and any options considered publicly outlined.
When, how, and why data was collected by the agency would also be detailed transparently, with users provided with “opt-out options and ‘off’ switches by design”, it said.
Services Australia's chief customer officer, Jonathon Thorpe, says AI and automation can improve the agency's service delivery. Image: Services Australia / LinkedIn
Navigating ‘lagging legislation’
Services Australia argued another barrier to its goals was “legislation and policy that doesn’t enable the safe and responsible use of automation and AI technologies, particularly at the rate it is evolving and maturing”.
The government has been considering reforms which could provide a consistent framework for government services to use automated decision making, following a recommendation by the Robodebt Royal Commission.
Services Australia said it would work to identify “lagging legislation” and “work with stakeholders and strategic partners to reform it”.
“The complexity and interdependencies within existing laws and regulations can present a significant barrier to reform,” it said.
The agency said it would also work to improve its workforce’s capabilities by training all staff in AI fundamentals and supporting additional training for “those responsible for the procurement, development, training and deployment of AI systems”.
AI and automation technologies still had the potential to “enhance efficiency, support more-informed decisions, and elevate the overall experience for the people we serve”, Thorpe said.
“While we’re optimistic about the potential benefits, we understand the barriers we may face implementing initiatives that create real value, and we are conscious of balancing the risks,” he said.
“This is why we are establishing robust and responsive governance, assurance, and decision-making artefacts that adhere to whole of government frameworks and policies, best practice and relevant standards.”