With over 1.6m Australians back on the job market thanks to the COVID-19 pandemic, there will be more competition than ever for that dream role once the economic fog lifts.
But if you’ve ever um’ed and ah’ed your way through a job interview, you’re in luck: LinkedIn has joined the ranks of recruiters tapping artificial intelligence (AI) technologies to deliver a virtual interview ‘coach’ designed to rate your performance before it actually matters.
Even as it launches a Video Intro feature that helps recruiters rapidly vet recorded video answers to common questions, LinkedIn has also bundled a variety of AI-powered voice and speech-analysis capabilities into a new tool that analyses the way you present yourself.
The tool measures factors such as how quickly you’re talking, how frequently you use filler words or “sensitive” phrases, and so on.
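For the technically curious, here is a rough sketch of how delivery metrics of this kind might be computed from an interview recording – assuming a plain-text transcript and the answer's duration are available; the filler-word list and output format are purely illustrative and are not LinkedIn's actual method:

```python
# Illustrative sketch of speech-delivery metrics similar in spirit to those
# described above: speaking pace (words per minute) and filler-word frequency.
# The filler list and transcript format are assumptions, not LinkedIn's criteria.

FILLER_WORDS = {"um", "uh", "erm", "like", "basically", "actually"}

def delivery_metrics(transcript: str, duration_seconds: float) -> dict:
    """Compute rough pacing and filler-word statistics from a transcript."""
    words = transcript.lower().split()
    total_words = len(words)
    filler_count = sum(1 for w in words if w.strip(".,!?") in FILLER_WORDS)
    minutes = duration_seconds / 60
    return {
        "words_per_minute": total_words / minutes if minutes else 0.0,
        "filler_rate": filler_count / total_words if total_words else 0.0,
    }

# Example: a 30-second answer
print(delivery_metrics("Um so I basically led the, uh, migration project", 30))
```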
Guidance videos from recruiters walk you through the process, helping you practise answers to commonly asked questions over and over again – like talking to the bathroom mirror, but with better feedback.
Dealing with an unemployment surge
AI is already recognised as a game-changer in the recruitment industry, with recruiters lauding the technology not only for its ability to slash the time spent recruiting candidates, but also for its ability to proactively identify and surface potential candidates who may not even be aware of a particular position on offer.
Staffing platforms like Talview and Autoview integrate AI into recruitment bots and video interviews; skills-testing tools like Vervoe use AI to evaluate candidates’ performance against standard tasks; and candidate-matching tools like Ideal trawl a company’s in-house database of CVs to identify candidates with the skills needed for a new position.
AI-based candidate vetting is also increasingly proving to be a lifesaver for recruiters overwhelmed by the sheer volume of candidates applying for any given position.
Telstra used such an approach earlier this year, when COVID-19 lockdowns interrupted its offshore contact centres and forced it to create 1000 new temporary Australian contact-centre jobs.
The company found itself vetting 19,000 candidates for the roles, using a purpose-built AI algorithm to analyse spoken responses to a series of questions and a “game-based cognitive test” to rank each candidate against the skills required for the position.
Highly ranked candidates were sent directly to hiring managers while lower-scoring candidates were diverted to recruiters “for further assessment” – allowing job offers to be made within two weeks of the close of applications.
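In code, that kind of score-based triage might look something like the following – a hypothetical sketch only, in which the field names and cutoff value are assumptions rather than details of Telstra's actual pipeline:

```python
# Hypothetical sketch of threshold-based triage: candidates ranked by an
# overall assessment score are routed either straight to hiring managers or
# back to recruiters for further assessment. Not Telstra's actual system.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float  # combined score from voice analysis and cognitive testing

def triage(candidates: list[Candidate], cutoff: float = 0.75):
    """Split candidates into fast-tracked and further-assessment groups."""
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    fast_track = [c for c in ranked if c.score >= cutoff]
    further_review = [c for c in ranked if c.score < cutoff]
    return fast_track, further_review

applicants = [Candidate("A. Lee", 0.91), Candidate("B. Singh", 0.62), Candidate("C. Wu", 0.80)]
to_managers, to_recruiters = triage(applicants)
print([c.name for c in to_managers], [c.name for c in to_recruiters])
```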
While LinkedIn’s AI tool currently offers a limited set of features, steadily improving computer-vision technology is being employed by other firms to analyse applicants’ facial expressions, measure how often they smile or blink, and evaluate their truthfulness as they answer key questions.
AI-powered tools are already being used in contact-centre environments to analyse callers’ perceived emotions. Throwing this into an increasingly powerful mix – tools like HireVue already combine such analytics to give each candidate an ‘employability score’ – could create standards-based AI recruiters that may ultimately be given the power to make job offers on the spot.
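One way such a combined score could be produced is as a simple weighted average of individual analytic signals – the sketch below is purely illustrative, with signals, weights and scale invented for the example; it is not how HireVue or any particular vendor actually scores candidates:

```python
# Purely illustrative: combining normalised (0-1) signals from several models
# (speech, facial expression, answer relevance) into a single score.

def employability_score(signals: dict, weights: dict) -> float:
    """Weighted average of normalised analytic signals."""
    total_weight = sum(weights.values())
    return sum(signals[k] * w for k, w in weights.items()) / total_weight

signals = {"speech_clarity": 0.8, "expression_engagement": 0.7, "answer_relevance": 0.9}
weights = {"speech_clarity": 0.3, "expression_engagement": 0.2, "answer_relevance": 0.5}
print(round(employability_score(signals, weights), 2))  # 0.83
```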
But is it fair?
AI’s myriad applications promise to overhaul every aspect of the recruitment industry, with a recent Gartner survey finding that 23 per cent of corporate AI adopters were using it in their HR and recruiting processes.
Key use cases included talent acquisition, ‘voice of the employee’ processes that analyse social-media tools for signs of actionable problems, and HR virtual assistants that support staff and customers.
Yet even as it pushes forward, the application of AI within corporate recruitment processes faces the spectre of a high-profile failure in which an Amazon-built recruitment bot was found to be deeply biased against female job applicants.
Issues of bias have become so endemic that no less an authority than the World Economic Forum has weighed in, with pundits offering tips on how to overcome the problem.
Yet with new research from professional-services firm Genpact finding that 36 per cent of Australian senior executives are already implementing AI, Australian job seekers could be particularly vulnerable to AI’s biases as companies rush to reconstitute their workforces in the wake of the pandemic.
Fully 70 per cent of Australian respondents to Genpact’s 2020 AI 360 study – which surveyed 4500 senior executives, workers and consumers across Australia and three other countries – said they worry about being discriminated against by AI.
Australian consumers were more likely than those in the other surveyed countries to push for companies to address inequalities in AI, and 63 per cent said they would be more likely to recommend a company that can demonstrate its AI algorithms are free of bias, the survey found.
Yet with Australian companies investing heavily in AI – 38 per cent of respondents said they’ve put $10m or more into AI, up 12 per cent on last year – respondents also believe that AI, properly trained, shows promise in eliminating gender bias in areas such as recruiting (56 per cent), hiring (54 per cent), and promotion (62 per cent).
“Actively reskilling and upskilling employees will be instrumental in using AI capabilities in a meaningful and efficient way,” Genpact country manager Richard Morgan said, noting that “to mitigate AI bias, AI capabilities need to be built with a broad data set and contributions from diverse minds and perspectives.”
“Diversity and gender equality is as important to maintain as we build AI capabilities as it is in the way we build and run organisations and our society as a whole”.