Rampant misuse of AI voice generation is stirring fear and speculation across multiple industries, as the rapidly developing technology fuels a growing number of impersonations, political deepfakes and security breaches.
Five years on from the now-infamous PSA clip in which a deepfake of former US president Barack Obama warned of the dangers of AI-driven misinformation, the technology has vastly improved at producing fraudulent images, voice and video content – and is widely accessible to anybody with a computer and a modest budget.
This year has seen widespread adoption of AI voice generation, a form of artificial intelligence used to create synthesised voices that sound like natural human speech.
"Voice synthesis is not new – think Wavenet and most recently Vall-E, Deep Voice – but what has changed is the access to the technology and the ease of use," said Nishan Mills, principle architect at Centre for Data Analytics and Cognition, La Trobe University.
"We see more widespread applications by lay users," said Mills.
One of the biggest social media trends this month, particularly on TikTok, has been AI-generated clips of prominent politicians such as US president Joe Biden and former president Donald Trump making uncharacteristic announcements about video games and pop culture.
The advent of public-facing AI tools has given rise to countless mock clips of public figures in dubious circumstances – whether it's an AI-generated Biden signing an executive order on the brilliance of Minecraft, or Pope Francis sporting a fashionable Balenciaga jacket.
And while the "meme" culture surrounding generative AI has provided hours of laughter-inducing content, the technology has already been adopted for far more nefarious uses.
Last month, images generated with the AI program Midjourney fooled countless Twitter users into thinking Donald Trump had been arrested, and right-wing commentator Jack Posobiec shared a fairly convincing fake video of Biden announcing the return of the US military draft in preparation for war.
In a meeting with science and technology advisers, Biden said it remains to be seen whether artificial intelligence is dangerous, but urged tech companies to proceed responsibly.
"Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," said Biden.
The US president also said social media has already illustrated the harm that powerful technologies can do without the right safeguards.
AI music goes viral
Experts have long anticipated the misinformation risks that AI-generated content could pose in politics and media, but perhaps less expected is the technology's recent impact on other industries, such as music.
This week, a song featuring AI-generated mock vocals of musicians Drake and The Weeknd went viral on streaming services, ringing serious alarm bells across the music industry.
Titled "Heart on My Sleeve", the fake Drake track was initially shared on TikTok by an anonymous user called Ghostwriter977 before being uploaded to streaming services.
The track generated more than 600,000 plays on Spotify and millions of views on TikTok before being taken down at the request of Universal Music Group (UMG) over copyright infringement.
While it remains unclear whether the instrumental of the track was produced by AI, "Heart on My Sleeve" contained entirely AI-synthesised vocals of Spotify's most-streamed artist Drake and pop singer The Weeknd, complete with lyrics, rhymes and on-time flow.
UMG told Billboard magazine the viral AI postings "demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists".
"The training of generative AI using our artists' music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on," said a UMG spokesperson.
"The side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation," they added.
Popular music critic Shawn Cee warned listeners that AI-generated music is advancing faster than it can be regulated.
"We're in the stage of machine learning where it's learning faster than it's being regulated," said Cee.
"It 100% can go up on Spotify… be there for one or two days probably, and the internet goes crazy over it.
"I think it's incredibly weird and creepy to have your image or your likeness used in situations or scenarios that you never consented to," he said.
AI voices used to bypass Centrelink systems
In March, Guardian Australia journalist Nick Evershed reported that he was able to access his own Centrelink self-service account using an AI-generated version of his voice – exposing a serious security flaw in the voice identification system.
Amid growing concerns over AI's threat to voice-authentication systems, Evershed's investigation found that a cloned voice, combined with his customer reference number, was enough to get through the phone-based security check.
Both Centrelink and the Australian Taxation Office (ATO) allow callers to use a "voiceprint" as an authentication measure when seeking access to sensitive account information over the phone.
While the ATO says its voice authentication is sophisticated enough to analyse "up to 120 characteristics in your voice", mounting reports of AI-cloned voices bypassing similar checks at banks and other institutions have led security experts to call for change.
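In broad terms, modern voiceprint systems convert speech into a numerical "embedding" and compare it against the enrolled sample, accepting the caller if the similarity clears a threshold. The sketch below illustrates that general principle using the open-source SpeechBrain toolkit – a generic illustration only, since the internals of the Centrelink and ATO systems are not public, and the file names are hypothetical placeholders.

```python
# Generic illustration of voiceprint verification -- NOT the ATO's or
# Centrelink's actual system, whose internal workings are not public.
# Uses the open-source SpeechBrain toolkit and a pretrained speaker model.
from speechbrain.pretrained import SpeakerRecognition

# Download and load a pretrained speaker-verification model.
verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",
)

# Compare an enrolled "voiceprint" recording against a new caller's audio.
# Both file names are hypothetical placeholders.
score, same_speaker = verifier.verify_files(
    "enrolled_voiceprint.wav", "caller_audio.wav"
)
print(f"similarity score: {float(score):.3f}, accepted: {bool(same_speaker)}")
```

A convincing clone can produce an embedding close to the genuine speaker's, which is why a similarity threshold alone is not a reliable defence.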
"Voice cloning, a relatively new technology using machine learning, is offered by a number of apps and websites either free or for a small fee, and a voice model can be created with only a handful of recordings of a person," said Frith Tweedie, principal consultant at privacy solutions consultancy Simply Privacy.
"These systems need to be thoroughly tested prior to deployment and regularly monitored to pick up issues.
"But it's hard to keep up with innovative fraudsters with ready access to these kinds of voice cloning tools. Which begs the question as to whether they should even be released in the first place," she added.
Australia does not currently have a specific law regulating artificial intelligence.