An American tech company has released a new tool it says will help flag discriminatory behaviour by artificial intelligence services and algorithms before it reaches customers.

Digital transformation software company Pegasystems has unveiled its Ethical Bias Check, which it says will help businesses practise the responsible use of artificial intelligence in selling products to customers by warning of potentially discriminatory behaviour before it reaches the public.

While AI is sometimes touted as a great equaliser, taking human biases out of the decision-making process, it has been shown time and time again to merely reflect those biases, with numerous documented examples of algorithms behaving in discriminatory ways.

The Ethical Bias Check works by flagging potentially discriminatory offers and messages.

The company says AI models can unintentionally “learn” biases over time, especially around age, ethnicity or gender, and these can lead to discriminatory selling behaviour that harms customers and damages the company.

Pegasystems believes it has developed technology that could go some way towards addressing these issues, particularly where AI is used to deliver products and services to consumers.

The tool lets a company set its own testing threshold, then sends alerts when detected bias exceeds that level, such as when an offer skews too far towards or away from specific demographics.

The company can then determine which element of the algorithm contributed to the discriminatory behaviour and adjust it to provide a fairer offering to customers, Pegasystems said.
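Pegasystems has not published how the check is implemented, but the mechanism it describes, a configurable threshold with alerts when offers skew too far towards or away from any demographic group, can be pictured with a minimal sketch like the one below. The function and field names are hypothetical and are not drawn from Pegasystems' product.

```python
from collections import Counter

def check_offer_bias(decisions, group_field, threshold=0.10):
    """Flag demographic groups whose share of a given offer deviates
    from the overall offer rate by more than `threshold`.

    `decisions` is a list of dicts such as
    {"customer_group": "18-25", "offer_made": True}.
    All names here are illustrative only.
    """
    totals = Counter()
    offered = Counter()
    for d in decisions:
        group = d[group_field]
        totals[group] += 1
        offered[group] += int(d["offer_made"])

    overall_rate = sum(offered.values()) / sum(totals.values())
    alerts = []
    for group, count in totals.items():
        rate = offered[group] / count
        skew = rate - overall_rate
        # Alert when this group is offered the product far more or far less often than average.
        if abs(skew) > threshold:
            alerts.append((group, round(skew, 3)))
    return alerts

# Example: alert if any age band's offer rate deviates from the average by more than 10 points.
decisions = [
    {"customer_group": "18-25", "offer_made": True},
    {"customer_group": "18-25", "offer_made": True},
    {"customer_group": "65+", "offer_made": False},
    {"customer_group": "65+", "offer_made": False},
]
print(check_offer_bias(decisions, "customer_group", threshold=0.10))
```

In this toy run the younger group receives the offer every time and the older group never does, so both are flagged; a company could then inspect which inputs drove that skew and loosen or tighten the threshold to suit the context.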

“As AI is being embedded in almost every aspect of customer engagement, certain high-profile incidents have made businesses increasingly aware of the risk of unintentional bias and its painful effect on customers,” Pegasystems vice president of decisioning and analytics Dr Rob Walker said.

“With the Ethical Bias Check, we’re empowering businesses with tools that help reduce AI bias to improve how businesses interact with customers and increase customer lifetime value.”

There are other products on the market that claim to do similar things to Pegasystems’ offering.

The company said most of these require a separate bias test for each individual offer made to customers, whereas the Ethical Bias Check can analyse an entire engagement strategy, scanning all decisions and pitches for potential bias.

It also says its technology is more flexible, allowing users to adjust thresholds in situations where a demographic skew is acceptable, such as when offering healthcare products to older members of the public.

The protection against discrimination is also offered on a continual basis, the company said, so that bias testing becomes part of the normal sales strategy.

Facebook’s technology for serving advertisements to users on its platform has also been the subject of claims of discrimination based on age and gender.

Last year, the tech giant was sued by the US Department of Housing and Urban Development over its practice of letting advertisers target their ads based on race, gender and religion.

Facebook has since scrapped this feature, but still faces claims that its algorithm discriminates based on these factors.

In her book Invisible Women: Exposing Data Bias in a World Designed for Men, Caroline Criado Perez said hopes that AI would make the world a fairer place have been dashed, pointing to examples of voice recognition technology failing to identify female voices and algorithms preferring men’s CVs over women’s.

“You would hope that artificial intelligence would improve things,” Criado Perez told ABC News last year.

“Humans are so notoriously biased, you would hope that computers would be better.

“But unfortunately humans are the ones making the algorithms, and humans are the ones feeding those algorithms data: we are creating biased algorithms based on biased data.”

It’s happened before

Late last year, regulators in the US announced an investigation into the algorithm used by the Apple credit card to determine the creditworthiness of applicants, after several complaints that it gave women a worse rating than men despite the same information being provided.

The use of artificial intelligence and tools like facial recognition to assist with law enforcement has also led to accusations of discrimination.

An algorithm used in the US to guide sentencing based on offenders’ likelihood of reoffending faced accusations of racial bias, with the system predicting that black defendants posed a higher risk of recidivism than they actually did.

Facial recognition has also regularly been found to be more effective at identifying white men than other people, creating the potential for misidentification and false charges. Three forms of the technology used for law enforcement facial recognition were found to correctly identify the gender of a person from a photo 99 percent of the time, but only when the subjects were white men.

These issues are long-standing. In 2015, it was found that just 11 percent of Google image search results for “CEO” showed women, despite women accounting for 27 percent of CEOs in the US at the time.