The government’s automated welfare debt recovery process, commonly known as ‘Robodebt’, was an unmitigated failure that exemplifies the need to understand the ethics behind artificial intelligence decision-making, says a new report.
In his foreword to The Ethics and Risks of AI Decision-Making report published by ACS, chair of the society’s AI Ethics Committee, Peter Leonard, said Robodebt offers a glimpse at what goes wrong when powerful institutions implement automated systems poorly.
“Robodebt illustrates how a calculation that is algorithmically correct when properly applied can be in error, unfair and illegal when applied more broadly, if it’s applied without due consideration of errors that can arise and without appropriate human intervention and consideration,” Leonard said.
“Robodebt is a very public example, but it’s just one of many automation-assisted decision-making processes that have blown up as illegal, unfair or biased, causing reputational damage to the organisations that deployed them.”
The system matched annual income data from the Australian Taxation Office (ATO) against the fortnightly income welfare recipients had reported to Centrelink.
It averaged the annual figure evenly across fortnights and raised debts against people whose averaged income showed ‘discrepancies’ with what they had reported.
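To make the flaw concrete, here is a minimal Python sketch of that income-averaging logic. It is an illustration based on published descriptions of the scheme, not the actual Centrelink implementation; the function names and dollar figures are assumptions made for the example.

```python
# Illustrative sketch of Robodebt-style income averaging.
# Not the real system; names and figures are hypothetical.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(ato_annual_income: float) -> float:
    """Spread the annual ATO figure evenly across every fortnight."""
    return ato_annual_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(ato_annual_income: float,
                       reported_fortnights: list[float]) -> list[int]:
    """Return the fortnights where the averaged figure exceeds what was reported."""
    average = averaged_fortnightly_income(ato_annual_income)
    return [i for i, reported in enumerate(reported_fortnights)
            if average > reported]

# A casual worker earns $13,000 in the first half of the year and
# reports every fortnight honestly. Averaging assumes the income was
# earned uniformly, so the 13 zero-income fortnights look like
# under-reporting and would trigger an automated debt.
reported = [1000.0] * 13 + [0.0] * 13
print(flag_discrepancies(13000.0, reported))  # flags fortnights 13 through 25
```

The arithmetic in each comparison is correct; the unfairness comes from assuming annual income was earned evenly across the year, which is exactly the ‘algorithmically correct but wrongly applied’ failure Leonard describes.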
Earlier this month the Federal Court slammed the Robodebt recovery scheme as “a massive failure of public administration” while approving a $1.8 billion settlement against the government.
Fairness and accountability
Leonard said artificial intelligence, machine learning, and other automated systems like Robodebt need to be carefully designed to consider “fairness, accountability and transparency”.
“But many organisations fail to ensure that the automation component is appropriately deployed within a decision-making chain involving people, so that there is not excessive reliance on it or use of it in inappropriate contexts.”
The Ethics and Risks of AI Decision-Making report notes that, as AI technologies progress, the importance of risk mitigation and appropriately accountable decision-making processes needs to be better articulated to help organisations understand the potential effects of automated systems.
It notes Australia does not yet have regulation to ensure AI is developed with the principles of ‘good AI’: fairness, equity, accountability, and transparency.
“Formal regulation may be a necessary and proportionate measure to assure good outcomes, at least in those contexts where the deployment of automation poses sufficiently high risk of exposure to harm, or where the level of uncertainty as to assessment and mitigation of risks or harms is unacceptably high,” the report said.
“Interim regulation may be appropriate for applications in particular industry sectors or particular use settings where the magnitude of risk of harm is such that a precautionary principle should be applied.”
Professional responsibility
The AI ethics report also recognises a need for the discussion around AI ethics to extend beyond the computational systems themselves to the organisations and management structures that put them in place.
“Many non-technical executives, including risk management specialists, are still developing the necessary skills and competencies to ensure that alignment,” it said.
“The data inputs, analytics capabilities and technologies driving automation applications are moving faster than the skills and competencies of many senior managers.”
Crucially, the report identifies data scientists and IT professionals as “straddling both sides of the divide” when it comes to designing, testing, and implementing automated systems.
In a controversial article for Information Age last year, Craig McDonald, an Emeritus Professor of Informatics at the University of Canberra, argued that the ICT professionals involved in the Robodebt debacle ought to have taken more professional responsibility for the failed project.
The report said it is vital for IT professionals to have the “tools and methodologies” needed to navigate the move toward greater AI decision-making.
“As we develop more complex AI systems, it’s critical that the new generation of IT professionals understand their responsibilities in this regard, and are prepared and capable of guiding their organisations through the transition,” the report said.
The Ethics and Risks of AI Decision-Making report is available now.