There is a “serious risk” of discrimination when AI is used in the hiring process, with the technology having the potential to “breathe new life” into long-running biases, according to new Australian research.

The use of AI, such as to trawl through resumes and applications or analyse video interviews, is now commonplace in recruitment.

An estimated 30 per cent of Australian organisations and more than 40 per cent around the world use predictive AI systems in recruitment, and these figures are expected to grow rapidly.

But there is an “urgent need” for the federal government to reform laws to protect against discrimination arising from this use, a real and serious risk, according to a new paper by Dr Natalie Sheard, a lawyer and postdoctoral fellow at the University of Melbourne.

Job applicants assessed by AI who are not white men with American accents may find themselves discriminated against and unable to land work, the paper found.

Serious risk of discrimination

The paper focuses on algorithm-facilitated discrimination, looking at the use of CV scanners, candidate assessments and video interviews in the hiring process.

Drawing on interviews with recruiters, the paper found the use of AI in recruitment “creates serious risks of algorithm-facilitated discrimination” because of a range of factors.

This can lead to “harm at unprecedented speed and scale” and can serve to “lock out” particular groups of people from the workforce, the paper said.

The research found that most AI systems used in hiring processes were purchased “off the shelf” from tech providers, with the data from these systems combined with the company’s own information.

This runs the risk of repeating historical and current hiring discrimination, Sheard said, and may mean the data underpinning the tool’s decisions is not representative of the demographics of the place where it operates.

US-based tech company HireVue offers one such AI recruitment tool.

Just 6 per cent of its job-applicant data comes from Australia and New Zealand, while nearly 80 per cent comes from North America.

More than a third of the data comes from job applicants who are white, according to the research.

“The risk of discrimination is therefore high when the HireVue system is used by employers in countries not adequately represented in the training data, as it is likely to be inaccurate at assessing many segments of those populations,” Sheard said.

HireVue also uses a third-party provider to convert audio from job interviews into text, and reports an error rate in these transcriptions for US English speakers of less than 10 per cent.

But for non-native English speakers with accents from other countries, this error rate is as high as 22 per cent.
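Transcription error rates like those cited above are typically word error rates: the number of word-level insertions, deletions and substitutions needed to turn the transcript into the reference text, divided by the reference length. As an illustration only (this is a standard metric, not HireVue’s disclosed method), a minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

On a 100-word answer, a 22 per cent word error rate means roughly 22 words garbled or lost before any scoring of the candidate even begins.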

An ideal candidate ‘on steroids’

Factors such as these mean that using AI in hiring may favour a particular type of person, from a particular country, with a particular way of speaking or writing, over a more qualified candidate without those qualities.

“This ideal candidate, in a video interviewing system, becomes someone who displays the facial movements of the white-skinned men who are represented in the training data, and speaks using the ‘pace, volume and diction’ of standard American English,” the paper said.

“In a text-based assessment system, the ideal candidate becomes someone who demonstrates the typing patterns or the speed and accuracy of non-disabled workers.”

Sheard called for immediate action from the federal government to bring discrimination laws up to date with AI, and to require transparency from the companies providing and using these tools.

“If we do not want disadvantaged groups to be subject to algorithm-facilitated discrimination we need to take urgent action,” she said.

Discrimination in AI hiring processes has been noted for more than a decade.

In 2014, Amazon scrapped an AI hiring model trained on the previous 10 years of CVs submitted for software developer positions after it was found to discriminate against women, a consequence of the field being male-dominated.

In late 2020, researchers at the University of Melbourne found that hiring algorithms at the time had in-built biases in favour of men, due mainly to humans imparting their own biases on the systems.

Earlier this year, the House of Representatives Standing Committee on Employment, Education and Training’s Future of Work report urged that the use of AI to hire workers be classed as “high risk” and banned where there is no human intervention.