Australian banks shouldn’t expect new regulations to curb surging AI-powered fraud, officials told an industry event just days after a “very nervous” OpenAI CEO Sam Altman warned that AI is already so good at mimicking people that it’s rapidly pushing banks into a “fraud crisis.”
AI-based voice cloning technology has become so effective that continuing to verify bank customers by phone is “a crazy thing to still be doing,” Altman said during a recent interview at the US Federal Reserve in Washington, DC, where OpenAI is opening a new office.
“AI has fully defeated most of the ways that people authenticate currently, other than passwords,” he said, noting that AI can bypass even newer customer verification technologies such as voice prints and the ‘selfie ID’ already used by numerous Australian banks.
“We have tried to warn people,” Altman said, noting that just because OpenAI hasn’t publicly released the technology to bypass such methods doesn’t mean it doesn’t exist – and that, inevitably, “some bad actor is going to release it.”
“Right now it’s a voice call, soon it’s going to be a video FaceTime,” he said, adding that “it’ll be indistinguishable from reality.”
“This is not a super difficult thing to do [and it’s] coming very, very soon,” he warned, calling on banks to develop new methods of verifying their customers – a partnership that US Fed vice chair for supervision Michelle Bowman said “might be something we can think about.”
Ease of use is a blessing and a curse
The warnings from Altman – who has previously said AI’s misinformation capabilities make him “nervous” and called government regulation a “quite wise” idea – reflect the growing pressure on banks, which face new security threats after rushing to deliver services online.
As easy as it is for Altman to advocate change, banks are large ships to turn – NAB appointed a group executive to manage its digital and AI programs only last month – and most have only recently started tapping their data to detect fraud and block scam payments, favouring biometric technologies for their perceived security and user-friendliness.
Mastercard, for one, is shifting to a passkey-based authentication system that will verify customers by scanning their face – reflecting an industrywide push towards passkeys that address the risks of password compromise.
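At its core, passkey login replaces a shared secret with public-key cryptography: the bank stores only a public key and challenges the customer’s device to sign a fresh random value, so there is no password to phish and nothing replayable to steal. Below is a minimal sketch of that challenge–response, simulating both device and bank in one script using Python’s cryptography library; the variable names and flow are illustrative assumptions, not Mastercard’s implementation.

```python
# Minimal sketch of the challenge-response at the heart of passkey login.
# Illustrative only: real passkeys use the WebAuthn/FIDO2 protocol, with the
# private key held in the device's secure hardware behind a biometric check.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Enrolment: the device creates a key pair; the bank stores ONLY the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
bank_stored_public_key = device_private_key.public_key()

# Login, step 1: the bank issues a fresh random challenge (prevents replay).
challenge = os.urandom(32)

# Login, step 2: the device signs the challenge after a local biometric unlock.
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Login, step 3: the bank verifies the signature against the stored public key.
try:
    bank_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Login approved: signature matches the registered passkey")
except InvalidSignature:
    print("Login rejected")
```

Because the private key never leaves the device, and is unlocked locally by the face scan, a deepfaked voice or video call has no secret to extract from the customer.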
Biometric security hasn’t deterred criminals – with NordVPN, for one, recently observing scammers opening accounts, renting properties, and booking trips using identity details fabricated with AI deepfakes or obtained by tricking victims into providing verification ‘selfies’.
Criminal group Scattered Spider is alleged to have used voice deepfakes in the recent Qantas data breach, drawing on social engineering prowess refined during attacks such as the frighteningly easy 2023 breach of consumer brand Clorox.
Identity verification service AU10TIX recently warned of a new fraud technique called ‘repeaters’ in which criminals use multiple, slightly different deepfakes to test targets’ defences – creating synthetic identities that are “indistinguishable from valid IDs” and then used for “coordinated mega-attacks”.
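Because each ‘repeater’ differs only slightly from the last, one plausible countermeasure is to flag near-duplicate submissions rather than exact matches. The sketch below illustrates the idea with a simple perceptual (average) hash over ID photos; it assumes the Pillow library is installed, and the filename and distance threshold are hypothetical.

```python
# Sketch: flag near-duplicate ID photos, a possible signal of 'repeater' fraud.
# Assumes Pillow is installed; the filename below is a hypothetical example.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Perceptual hash: downscale to greyscale 8x8, threshold on the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

seen_hashes: list[int] = []

def check_submission(path: str, threshold: int = 8) -> bool:
    """Return True if this ID photo is suspiciously close to a previous one."""
    h = average_hash(path)
    suspicious = any(hamming(h, prev) <= threshold for prev in seen_hashes)
    seen_hashes.append(h)
    return suspicious

# if check_submission("applicant_id_photo.png"):
#     print("Near-duplicate of an earlier submission - escalate for review")
```

Exact-match blocklists miss these attacks by design; a small Hamming-distance threshold catches images that were tweaked just enough to look new.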
Even as banks fight such manipulations, AI voice deepfakes are also being used for good: the CBA recently announced an army of Australian-sounding AI bots tasked with wasting scammers’ time through long but meaningless conversations.
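The mechanics of such ‘scam-baiting’ bots can be sketched simply: a scripted or LLM-driven persona answers each scammer message with something plausible but unproductive, keeping the conversation alive as long as possible. In this illustrative Python sketch, llm_reply is a hypothetical placeholder for a real language model call, and the persona is an assumption rather than the CBA’s actual design.

```python
# Sketch of a time-wasting 'scam-baiting' loop. llm_reply() is a hypothetical
# placeholder for a real LLM call; the persona prompt is an illustrative guess,
# not the CBA's actual implementation.
import random

PERSONA = (
    "You are a chatty, easily distracted Australian retiree. Sound cooperative "
    "but never provide payment details; ask rambling questions instead."
)

STALLING_LINES = [
    "Sorry love, the kettle's boiling - what was that account number again?",
    "Hang on, my grandson usually helps me with the computer...",
    "Which bank did you say you were from? I've got a few accounts, you see.",
]

def llm_reply(persona: str, conversation: list[str]) -> str:
    # Placeholder: a production bot would call an LLM here with the persona
    # prompt and the full conversation history.
    return random.choice(STALLING_LINES)

def bait(scammer_messages: list[str]) -> None:
    history: list[str] = []
    for msg in scammer_messages:
        history.append(f"Scammer: {msg}")
        history.append(f"Bot: {llm_reply(PERSONA, history)}")
        print(history[-2], history[-1], sep="\n")

bait(["Hello, I'm calling from your bank's fraud team.",
      "I need you to confirm your card number."])
```

Every minute a scammer spends talking to a bot is a minute not spent talking to a real customer, which is the entire design goal.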
“Banks are some of the most sophisticated operators in Australian business,” ASIC chair Joe Longo told the recent Australian Banking Association (ABA) Banking Conference, “and if anyone has the systems, technology, and processes to lead Australia’s AI revolution, it’s you.”
“You have been using machine-learning for years and already have some of the necessary governance structures in place,” he added, hailing the “boldness” of banks whose teams of data scientists are already using AI “to solve genuine customer problems.”
Fight the AI crims with innovation, not regulation: regulators
Despite years of industry and government rhetoric about the importance of AI ‘guardrails’, regulators made it clear that banks will have to rely not on regulatory intervention but on their own innovation to keep their customers safe.
Amidst the “undeniable” opportunities of AI, Longo said, “with every new opportunity comes new risk… [and] because our existing laws are technology-neutral, some guard rails already exist.”
Adding new regulations “becomes a burden borne by everyone [and] a handbrake on innovation,” he said, warning that new rules create a “mosaic of complexity… when you have got to spend more on compliance lawyers than coders.”
Yet Longo’s stance is more than philosophical: admitting that AI innovation had already outpaced regulators’ ability to keep up, he said banks would need to figure out how to secure digital banking while regulators spitball new ways to apply existing laws.
“We have yet to create the regulatory architecture, language, or strategy that will enable us to regulate confidently and in real time a group of technologies that is evolving as rapidly as we’re seeing,” Longo said, urging regulators to be “bolder and more imaginative.”
“It’s already apparent that AI could present challenges for regulators in data and information gathering,” he said, advising that “we’ve got to kick the tyres and see just how far our existing tech-neutral framework will flex.”