The potential harm caused by biased artificial intelligence needs to be addressed, as machine learning systems could lead to unlawful and discriminatory decision-making, the Australian Human Rights Commission has said.
Its new report, Addressing the problem of algorithmic bias, examines the interaction between human rights – specifically equality and non-discrimination – and artificial intelligence, in an attempt to move the conversation forward on proper governance and oversight of these powerful systems.
“Human rights should be considered whenever a company uses new technology, like AI, to make important decisions,” said Human Rights Commissioner Edward Santow.
“Artificial intelligence promises better, smarter decision making, but it can also cause real harm. Unless we fully address the risk of algorithmic bias, the great promise of AI will be hollow.”
To illustrate the point, the report’s authors present five simulated scenarios showing how algorithmic bias may play out in existing business decision-making.
These scenarios were based on a simulated AI system for choosing which customers might be more profitable for an electricity company.
They demonstrate how inequality can be baked into an algorithm through the repetition of existing unequal structures, inaccurate or outdated data, or incomplete data that does not account for underrepresented people.
Human prejudice in the data collection phase, for example, could lead to people from specific racial groups being singled out as ‘unprofitable’ by an algorithm.
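That mechanism is straightforward to reproduce. The sketch below is a hypothetical illustration, not code from the report: the usage feature, the group attribute and the ‘profitable’ label are all synthetic assumptions, but they show how a model trained on prejudiced historical decisions learns to repeat them.

```python
# Hypothetical sketch of bias being "baked in": synthetic data only, with an
# invented usage feature and group attribute, not figures from the report.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

usage = rng.normal(50, 10, n)   # a genuine predictor of profitability
group = rng.integers(0, 2, n)   # a protected attribute (0 or 1)

# Historical labels reflect past human prejudice: group 1 customers were
# marked unprofitable more often at the same level of usage.
historical_label = (usage + rng.normal(0, 5, n) - 8 * group) > 50

# Training on that history, with the protected attribute as a feature,
# teaches the model to reproduce the disparity.
X = np.column_stack([usage, group])
model = LogisticRegression().fit(X, historical_label)

for g in (0, 1):
    p = model.predict_proba([[52.0, g]])[0, 1]
    print(f"group {g}: P(profitable) = {p:.2f}")
```

Fed identical usage profiles, the trained model still scores the two groups differently, because the protected attribute helps explain the biased historical labels it was fitted to.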
Leveraging data to make business decisions has led to a boom in data science locally and abroad, but Lauren Solomon, CEO of the Consumer Policy Research Centre, said Australians have a right to know that their personal information is being used responsibly.
“Strong protections are needed for Australian consumers who want to know how their data is collected and used by business,” said Solomon.
“Businesses should ensure the lawful and responsible use of the decision-making tools they use and this new technical paper highlights why that is so important.”
Crucially, the Human Rights Commission points out where AI systems could contravene existing Commonwealth human rights laws, such as the Racial Discrimination Act and the Sex Discrimination Act.
“Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk,” Santow said.
“However, good businesses go further than the bare minimum legal requirements, to ensure they always act ethically and do not jeopardise their good name.
“Rigorous design, testing and monitoring can avoid algorithmic bias.”
The report includes a series of checklists for organisations and data scientists to run through when designing and implementing AI-powered decision-making processes.
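The report’s checklists are procedural rather than code, but the simplest kind of monitoring step they point towards can be sketched in a few lines: comparing a system’s favourable-decision rates across demographic groups. The function below and its ‘four-fifths’ threshold are illustrative assumptions borrowed from employment-discrimination practice, not requirements from the paper.

```python
# A minimal monitoring sketch, not from the report: compare the rate of
# favourable decisions across groups and flag large gaps for human review.
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Lowest group's favourable-decision rate divided by the highest's."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

# Hypothetical audit data: 1,000 automated 'profitable customer' decisions.
rng = np.random.default_rng(1)
groups = rng.integers(0, 2, 1000)
decisions = rng.random(1000) < np.where(groups == 0, 0.60, 0.35)

ratio = disparate_impact_ratio(decisions, groups)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # the 'four-fifths rule', a common heuristic threshold
    print("flag for review: favourable-decision rates differ across groups")
```

A failing ratio does not itself establish unlawful discrimination, but it is the kind of early signal that should send a deployed system back for the rigorous testing Santow describes.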
Rules and regulations specific to artificial intelligence tend to be lacking, as legislation lags behind technological advancement.
More than a year ago, some major Australian businesses said they were beginning to incorporate the government’s eight non-binding AI Ethics Principles into their AI development – but there is no AI regulator checking on compliance.
And Standards Australia has called for more direct local involvement with the creation of international AI standards, recognising the process as a way to create responsible AI locally and abroad.
Unfortunately, the rush to develop and experiment with AI has already led to examples of discriminatory AI in the wild.
Twitter was met with outrage in September when users discovered that its algorithm for choosing photo previews in tweets privileged white people’s faces over black people’s faces.
Identifying and untangling bias can be a near-impossible task, as commonly used training datasets can contain major faults that go undetected for a long time. In July, MIT removed a large dataset of 80 million small images – which had been used to help train computer vision systems – after AI researchers discovered it contained racist and sexist slurs.