If you’re working to fill that crucial developer role and are thinking that sterling candidate on your short list seems too good to be true, proceed cautiously: new figures suggest that Australians are leading the world in using AI to cheat during job interviews.
Some 73 per cent of Australian job seekers are using AI tools in their current job search – well ahead of the 58 per cent global average – and 90 per cent of those admit to using the technology to “exaggerate or lie about their skills,” according to marketplace vendor Capterra’s survey of 3,000 job seekers.
That includes not only editing tasks, such as using AI to improve the wording of a CV or cover letter – with 26 per cent saying the technology helps them appear to be a better candidate – but also automating the process of applying to more jobs.
Concerningly, 27 per cent of Australian job seekers admitted using AI to complete a test assignment or skills assessment set as part of the interview process, while 22 per cent used AI to generate answers to interview questions.
GenAI tools come bundled
Tools like Verve AI and Final Round’s Interview Copilot have formalised the process, bundling AI resume builders and mock interview tools with a genAI-driven coach that listens in during the interview, suggesting answers, writing code samples, and evaluating replies.
Although AI might help job seekers shoehorn themselves into a job they’re unsuited for, they risk creating significant problems for their new employers – with widely cited CareerBuilder figures pegging the cost of a bad hire at $22,700 (US$14,900).
That number, which predates the popularisation of genAI, accompanied a warning that 74 per cent of 2,257 surveyed HR professionals said they had hired the wrong person at least once – a risk that increases when job seekers misrepresent themselves with AI.
More recent figures suggest the cost of a bad hire can reach 30 per cent of that person’s first-year salary – and that bringing the wrong person into a team can cut productivity by 39 per cent and drive 80 per cent of employee turnover.
“If businesses don’t get on top of job seeker AI deception soon,” Capterra warns, “even more costly bad hires will follow.”
Setting the terms for AI – on both sides
The challenge facing employers is that using increasingly capable genAI tools to glow up a job application actually does work: Capterra’s survey found that job seekers using AI do complete more applications and are “far more likely” to get a job offer.
A recent Canadian study of 245 interviewees found the same: interviewees who used ChatGPT during mock video interviews received “considerably higher scores” on overall performance and content than those who didn’t use the genAI tool.
“AI chatbots can consistently provide task-relevant and accurate responses with little-to-no effort or skill on the part of the user,” the study’s authors note, adding that AI cheating “could improve the content of interviewees’ responses.”
Yet the participants who used ChatGPT – even those who edited ChatGPT’s responses rather than submitting them verbatim – also received lower honesty and procedural justice ratings, suggesting that employers can spot cheaters by being attentive to detail.
Greg Yang, a cofounder of Elon Musk’s xAI, recently said he “wasn’t even mad, just very curious” after catching a job applicant using Anthropic’s Claude AI tool during an interview.
“We had a nice chat at the end where they taught me all the tricks.”
Warning issued
Although AI giants like OpenAI have been reluctant to release tools to automatically spot AI-generated content, companies can take a few steps to vet potential hires – for example, avoiding phone interviews and using in-person or video interviews instead.
Employers should make clear on their job application page that using AI to misrepresent credentials or skills will get an application immediately thrown out, Capterra advises, and take the time to actually conduct reference checks to validate candidates’ claims about their skills.
There are other telltale signs that an applicant is using genAI – for example, that they can solve technical problems instantly, or that they provide a solid answer to a question but can’t elaborate on it or adapt it to the company’s specific circumstances.
“We don’t disqualify anyone for using AI,” one employer shared in an online discussion about AI cheating tests.
“We [have] disqualified them because of their dishonesty.
“If you can’t trust someone in an interview, how can you trust them in a remote environment?”
Although genAI is steadily being integrated into HR functions, a recent Gartner survey found many companies winding back plans to use it for candidate-facing chatbots, compiling interview questions, and generating job descriptions or skills data.
Many job seekers “are unsure about the boundaries between legitimate assistance and cheating in the hiring process,” notes Robert Half senior division director Josh Burke, warning that “candidates need to understand what constitutes responsible use.”
AI, he said, “is a great tool for streamlining your job search [but] as soon as it replaces your input, originality, or honesty, it becomes a likely case of cheating.”