Australia could join a US-UK led partnership in which artificial intelligence (AI) safety institutes will monitor and test the safety of new AI models, the two countries have announced, even as Google, OpenAI, and Apple debut powerful new AI engines.

The new memorandum of understanding (MoU) unites the UK’s AI Safety Institute (UKAISI), which was launched last November as a global hub for innovation and safety, with the US AI Safety Institute (USAISI) – which was announced in February and will, along with the industry-based AI Safety Institute Consortium (AISIC), operate under the auspices of the National Institute of Standards and Technology (NIST).

Both safety institutes will unite researchers and industry experts to, as the UK Government put it, “carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models.”

Based on the principle of ensuring that “AI developers are not marking their own homework when it comes to safety,” the model facilitates the creation of partnerships, with AISI researchers supported by supercomputing capabilities and access to the UK’s $578 million (£300 million) AI Research Resource.

AISI is already working closely with the Alan Turing Institute – the UK’s national data science and AI body – as well as engaging with international partners including the Government of Singapore and, with the latest MoU now taking immediate effect, USAISI.

The two organisations “have also committed to develop similar partnerships with other countries” – yet how and whether Australia participates in the consortium may well depend on how aggressively the government moves to establish a similar AI safety framework in this country, where one recent survey found 80 per cent of Australians are concerned about controlling AI’s potential catastrophic risks.

Amidst exhortations from Australia’s Productivity Commission to avoid heavy-handed AI regulation, a recent decision by the eSafety Commissioner to bring forward a review of Australia’s Online Safety Act highlighted concerns that local legislation isn’t keeping up with the technology – which is also eroding voter trust even as the government pivots towards a “risk-based” regulatory framework.

Yet Australian regulations may still be years away, leaving local regulators looking on as the EU passes its landmark AI Act – which will introduce a stepped program of obligations around use of AI in biometrics, critical infrastructure, education, employment, immigration, and other areas – and the UKAISI-USAISI partnership fights to establish itself as an alternative axis of global AI regulation.

Putting the heat on AI regulators

Governments worldwide face the need to move both quickly and confidently to regulate for the safety of AI systems, which are continuing to progress in leaps and bounds – a reality that was flagged as the UKAISI’s founders set out six months ago “to prevent surprise to the UK and humanity from rapid and unexpected advances in AI.”

Even then, UK regulators were warning about “new powerful models expected to be released [in 2024] whose capabilities may not be fully understood” – yet despite promising to “quickly put in place the processes and systems to test them before they launch”, the body seems to have missed the opportunity to vet newly released AI tools that blur the boundary between reality and deception.

OpenAI, for its part, has this year not only debuted its Sora text-to-video tool but recently showcased AI-based voice cloning tools that can convincingly mimic the voice of anybody from just 15 seconds of source audio.

Google – which recently walked back its Gemini image generator after concerns that it was imposing arbitrary diversity standards – has also turned heads after teasing VLOGGER, an AI video generation tool capable of animating a still photo to simulate a video of someone speaking.

Even Apple is refining its genAI game, with all eyes on the company’s June Worldwide Developers Conference (WWDC) and its expected debut of a slew of AI features that could democratise risky AI by putting it into the hands of anybody with an Apple device.

Last month, the company teased how AI might change the user experience by publishing a research paper showcasing how its new large language model (LLM) delivers “large improvements” in AI capabilities – with even its smallest, presumably iPhone-focused LLM “achieving performance comparable to that of [industry benchmark] GPT-4” and larger models “substantially outperforming it.”

Ensuring safety amidst such breakneck innovation is no small task – but US Secretary of Commerce Gina Raimondo is confident that transnational partnerships will provide the foundation to make it achievable.

“AI is the defining technology of our generation,” she said, noting that the partnership “is going to accelerate both of our institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.”

“Our institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance” about managing the risks posed by new AI technologies.

“We aren’t running away from these concerns,” Raimondo said.

“We’re running at them.”