Scammers fleeced a large multinational out of around $25.6 million through an elaborate scheme in which AI deepfakes posed as the firm’s chief financial officer and other coworkers in a video call with an unsuspecting employee.
The scam started, as such scams so often do, with an email about sending money.
It was made to look like it came from the company’s chief financial officer – who is based in the UK – and was sent to an employee in the company’s Hong Kong office.
At first the employee was suspicious, Hong Kong police told reporters in a briefing late last week, as reported by local news outlets, but that changed when the staffer was invited to join a video call.
Sitting in that meeting were the CFO, coworkers, and a few external participants – or so it appeared.
They convinced the employee to make 15 transactions to local bank accounts over the course of a week, totalling HK$200 million ($25.6 million).
It was only when the worker spoke to head office that they realised what had happened.
“I believe the fraudster downloaded videos in advance and then used artificial intelligence to add fake voices to use in the video conference,” acting senior superintendent Baron Chan said, according to RTHK news.
Chan added that the fake meeting participants did little more than introduce themselves and issue orders to the employee.
After the initial meeting, scammers kept in contact with the staffer through instant messaging and voice calls.
Hong Kong police said they wanted to bring this case to the public to make people aware that scammers are now going beyond merely faking CEO emails and text messages.
“In the past, we would assume these scams would only involve two people in one-on-one situations, but we can see from this case that fraudsters are able to use AI technology in online meetings, so people must be vigilant even in meetings with lots of participants,” Chan said per RTHK news.
He went on to suggest that people ask lots of questions and even get other meeting participants to turn their heads to help spot signs of AI.
Police declined to name the company or the employee.
Faking reality
Recent advances in generative AI have seen a rise in the number of cheap, easily accessible tools for creating images, audio, and videos of other people using small samples posted to the internet.
Last March, stories emerged of scammers using AI to trick people’s family members into sending them cryptocurrency.
By taking snippets of audio from personal YouTube videos – in one case they used a clip of the victim snowboarding – the scammers were able to create audio masks convincing enough to fool someone’s mother.
For the likes of celebrities and business leaders, deepfakes are even easier to produce thanks to the wealth of content – audio, video, photos – that has been posted online and used to train AI models.
Recently, explicit deepfakes of pop star Taylor Swift were circulating freely on X (formerly Twitter), highlighting how AI is often weaponised against women.
Far from being created by some bespoke system trained to generate pornographic images of Swift, the pictures reportedly came from Microsoft’s ‘Designer’ text-to-image generator.
Microsoft soon put a stop to the loopholes people were using to freely create Taylor Swift deepfakes, as reported by 404 Media.
Back in 2019, when a Chinese app called Zao made headlines for letting users swap their faces into movies and TV shows, cyber security analyst Matt Aldridge warned that deepfakes would usher in “a future of widespread distrust”.
“We may think that we’re having a video call with a close colleague or a loved one, but the other party is actually an imposter,” he said at the time.
“We need to start preparing for this now and understand how we can ensure that our communications are real and secure.”