Welcome to our three-part series on disinformation, misinformation, and malign influence (DMMI). We look at how DMMI is changing our everyday lives. Here, in part 2, we look at how generative AI is changing the game, making us question what is even real anymore.
OpenAI’s recent demonstration of GPT-4o, a talking, singing and feeling version of ChatGPT, lit up the Internet, but its promise – the integration of human-like comprehension and speech into everything from computers to robots to cars – is far more profound.
Software that behaves like humans has been exploiting victims for years, enabling AI-driven voice scams that use simulated voices to defraud company employees – and, as the technology improves, updating ‘hi mum’ scams with emotionally exploitative AIs that contact people and beg for money using the voice of a child, partner, or parent.
It happened to a Vietnam-based septuagenarian friend of the mother-in-law of ACS member Eric Nguyen, who recounted the woman’s nightmare after she had sent her 30-year-old son to study in Australia.
The woman got a message from his Facebook Messenger account late at night, asking her to pay for “some urgent situation,” Nguyen recalls.
“She checked it and he’s supposed to be studying here,” he said, “and it sounded right so the next day, she sent the money.”
A few days later, she received a voice message that sounded like her son, saying he needed her help and would send more information – which arrived as another Facebook Messenger message containing a link that she dutifully clicked.
“A few days later, she realised that her bank account was clear,” Nguyen said, with her life savings of “over a million dollars” simply… gone.
Creating the tools for fraud
Such stories represent the dark side of ever-improving AI technology – and as OpenAI’s demonstrations of GPT-4o showed, its ever more human-like design means you should anticipate regular contact from emotionally exploitative generative AI (genAI) engines bent on extracting money from you.
Worse still, you may find yourself conversing with what seems to be a convincing facsimile of a loved one: with tools like Amazon Alexa and OpenAI’s Voice Engine already able to clone someone’s voice from just 15 seconds of spoken source material – and Microsoft said to have trimmed this to 3 seconds – cyber criminals can build a serviceable copy of your voice by doing nothing more than calling your phone and recording your voicemail greeting.
Adding video to the mix, as with OpenAI’s frighteningly good Sora engine, Runway’s new Gen-3 Alpha, and other text-to-video AI tools, makes the deception even more effective – as one employee learned the hard way when he was tricked into wiring $40 million after joining a video call filled with AI-powered facsimiles of coworkers.
Even as the spread of DMMI content challenges long-held notions of trust, genAI threatens to create other problems by facilitating widespread scamming with targeted, personalised content – which is harder to spot than ever, not least because genAI enables foreign scammers to communicate with victims in letter-perfect English.
Throw in the emotional voice capabilities of GPT-4o and its ilk – which could help criminals emulate a target’s voice to defeat biometric security systems – and you have a recipe for a scam disaster that is likely to unfold as genAI becomes ubiquitous over the rest of this year.
“What I’m worried about is not more of the same, but the new stuff that may only be possible with AI,” OpenAI co-founder and CEO Sam Altman told a recent Brookings Institution webinar in which he alluded to “this sophisticated one-on-one persuasion that you just couldn’t do before.”
Indeed, last year a Microsoft survey found that 71 per cent of respondents were worried about the risks of AI scams – pipping concerns about deepfakes (69 per cent), online abuse (69 per cent), and AI hallucinations (66 per cent).
A more recent TransUnion report warned that synthetic identity fraud had grown 14 per cent year on year, leaving $4.68 billion ($US3.1 billion) exposed to synthetic fraud attacks, while Deloitte’s Center for Financial Services predicts losses could grow to $34.71 billion ($US23 billion) by 2030.
Treat your senses with caution
AI-driven voice cloning tools are the latest in a panoply of technological challenges that threaten to distort truth and muddy accountability online – flooding the Internet with deepfakes of actors, musicians, politicians and other celebrities to elevate DMMI campaigns to a whole new level.
In an election year – when promotion of propaganda, circulation of misinformation, and malicious manipulation of the truth are more dangerous and more likely than ever – DMMI is becoming indistinguishable from fact as propagandists utilise genAI tools to flood the Internet with realistic written content, photorealistic images, video, and even virtual copies of actual people.
It’s still early days, but estimates suggest that 10 per cent of Internet content is already generated by AI, with Europol and other experts warning – in a widely quoted forecast whose provenance remains contentious – that this could rise to 90 per cent of online content by 2026.
Even direct human interaction will require scepticism: as you will see if you close your eyes while talking with GPT-4o, the technology has already crossed the proverbial ‘uncanny valley’ – a term describing the unease provoked by objects that are almost, but not quite, human-like – and become convincing enough to pass for an actual person.
The implications of this are already being seen, with the eSafety Commissioner bringing forward a planned review of genAI policy in a tacit admission that the technology is moving too quickly to regulate.
Surveys have repeatedly confirmed that we are collectively defenceless against the synthetic content AI is feeding us – content interspersed with DMMI whose seeming reality is easy to mistake for truth.
Earlier this year, for example, a University of Waterloo study of 260 participants found that just 61 per cent of respondents could tell whether images they were given showed real people or were created by genAI.
Tellingly, those respondents were given time to scrutinise the images for telltale signs of fakery; for people rapidly doomscrolling their social feeds while waiting for a latte or a train, lead researcher and PhD candidate Andreea Pocol noted, the illusions spun by genAI are likely to be accepted as truth by people who “won’t pick up on these cues” and have become conditioned to trust real and fabricated content equally.
“People are not as adept at making the distinction as they think they are,” Pocol said, warning that “the tools of disinformation have been constantly shifting and evolving.”
“It may get to a point where people, no matter how trained they will be, will still struggle to differentiate real images from fakes.”
It’s not even a new phenomenon: seven years ago, long before genAI was a thing, a Pegasystems survey found that most people using AI-powered services or devices didn’t know they were interacting with an AI-based system – and that was back when the relatively low quality of AI content made it much easier to spot than today’s output.
The Internet is a DMMI echo chamber
Enthusiasm over AI’s possibilities has seen governments and tech giants funnelling money into its development at an unprecedented pace, with each stakeholder pursuing slightly different agendas in their quest for verisimilitude.
Meta CEO Mark Zuckerberg, for one, wants to build human-level AI and populate the metaverse with it; Elon Musk’s AI venture, xAI, aims to build a “maximally curious AI” and recently secured $9 billion in venture capital funding to make it happen; and consumer interaction with information and services could be reshaped by Apple’s looming addition of OpenAI’s GPT capabilities to its Siri assistant, which runs on more than two billion iPhones, iPads, Macs, and other devices worldwide.
Indeed, the modularisation of genAI technology – new versions have been optimised to run on devices as simple as smartphones – means that it will soon be integrated into every device and service that you use.
This ubiquity, in turn, is poised to rapidly and irrevocably change society’s understanding of what ‘truth’ actually means – and our ability to distinguish AI from non-AI generated content is likely to be an early casualty.
It’s not that we don’t want to maintain scepticism about AI: the latest KPMG-University of Queensland study of trust in AI found that 61 per cent of respondents are wary about trusting AI systems – yet 73 per cent trust genAI content despite that wariness.
In other words, most people are worried about problematic AI-generated content, but think they are smart enough to detect it and won’t get duped.
Yet the truth is far harder to detect when it is buried amidst a flood of DMMI content that is often indistinguishable from legitimate, verifiable information – and with half of respondents to that study admitting they don’t understand AI, or when and how it’s used, most people simply aren’t intellectually equipped to do so.
This leaves the average person deeply susceptible to often subtle manipulation – images of US President Joe Biden in a hospital bed, for example, seemed legitimate to casual observers because he is perceived as old and frail – while sustained manipulation over time can shift the goalposts, seeding false and biased information that becomes ‘truth’ with enough repetition.
Altman remains optimistic that there is still time for regulations, and human intellect, to stay ahead of the flood.
“We have not seen the predicted flood of misinformation yet,” he said, arguing that “the platforms that distribute content online have been taking [DMMI] seriously for quite some time now… and I think we’ve built up technological defences and social antibodies.”
The strength of those defences will be tested, however, as the technology continues to evolve at breakneck pace.
Addendum (2 July 2024): updated to note that the provenance of the Europol forecast is contentious.
Read more:
Part 1: What is true anymore?