Developers of core AI models will be forced to share safety testing results after US President Joe Biden signed a sweeping executive order designed to address national security, data privacy, public safety, and other potential risks posed by the fast-evolving technology.

The newly issued Executive Order on Safety, Secure, and Trustworthy Artificial Intelligence (SSTAI) – which marks what the White House has called “the most significant actions ever taken by any government to advance the field of AI safety” – includes a range of actions addressing AI safety and security, privacy protection, advancement of equity and civil rights, responsible use of AI for consumer applications, support for workers, promotion of innovation and competition, and policies to ensure responsible government deployment of AI.

AI companies must notify the US government when training “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety,” with those companies ordered to share the outcomes – as GPT-4 developer OpenAI has done – of all red-team safety testing to demonstrate the safety of their platforms.

Ted-team safety testing emulates real-word adversaries, such as nation states, to uncover any blind spots or risks in the system.

“Rigorous” standards for red-team testing will be developed by the National Institute of Standards and Technology (NIST) and applied to critical infrastructure sectors by the US Department of Homeland Security (DHS) – which will establish an AI Safety and Security Board as DHS works with the US Department of Energy to evaluate new AI models’ threats to critical infrastructure and their potential to create chemical, biological, radiological, nuclear, and cyber security risks.

The new order also mandates the development of standards for screening the use of AI in biological synthesis – with federal life-sciences research funding to be tied to compliance with those standards – as well as an “advanced cyber security program” for developing AI tools to find and fix critical software vulnerabilities.

Reflecting growing concern about the pervasiveness of AI-generated ‘deepfakes’, the order will drive development of standards and best practices to enable the identification of AI-generated content through watermarks and other methods – with government agencies expected to adopt the tools for their own communications.

“Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic,” the order says, “and set an example for the private sector and governments around the world.”

Even as NIST and the US government agencies work to develop formal standards around responsible AI, the order also mandates the creation of standards for strengthening privacy protections – such as developing techniques that allow AI large language models (LLMs) to be trained on large volumes of training data while preserving the privacy of the individuals to whom the data pertains.

Creation of a new Research Coordination Network will drive innovation in this area, working in lockstep with the National Science Foundation to promote the use of such tools across US government agencies – as well as reviewing and guiding the way that agencies buy information from data brokers.

Addressing the social harms of AI

The publication of clear initiatives and a binding framework for AI development has been broadly welcomed across the industry, although some AI startups fear the tight controls could stifle innovation and they are unlikely to constrain the activity of malicious actors that are tapping AI for platforms such as WormGPT.

“While I applaud the government’s desire to ensure AI is safe, it’s also imperative that regulation is balanced with the speed of innovation,” noted Arctic Wolf chief product officer Dan Schiappa in response to the new executive order.

“If we slow down AI innovation significantly, foreign companies could innovate faster than us and we risk falling behind in the AI race. While these rules are necessary, they may only keep well-intentioned people in check and will ultimately have no impact on threat actors as they will not follow these rules.”

Yet for the major companies stepping up to support the White House’s efforts – 15 companies, including IBM, Salesforce, Nvidia and others recently joined Microsoft, Meta and OpenAI by signing up to its responsible AI initiatives – the executive order is also a way to demonstrate their commitment to broader social values as it addresses issues stemming from the unconstrained use of AI in workplaces.

The risk of “algorithmic discrimination”, for example – in which LLMs apply bias in their decision-making because they are trained on biased data sets – will be outlined through “clear guidance” to landlords, federal contractors, and federal benefits programs “to keep AI algorithms from being used to exacerbate discrimination”.

Justice agencies will be tasked with developing best practices for investigating and prosecuting civil rights violations related to AI, as well as in developing preventive standards to temper the use of AI in sentencing, parole, pretrial release and detention, crime forecasting, forensic analysis and other practices.

Biden also ordered a report into AI’s potential labour market impacts, as well as mandating the development of principles and best practices to “mitigate the harms and maximise the benefits of AI for workers” by addressing concerns that AI’s introduction into workplaces can create job displacement; violate labour standards; compromise workplace equity, health, and safety; and drive data collection.

The long-awaited Executive Order – which also includes a call for international collaboration around standards for responsible AI – comes as the US government joined other G7 leaders in the release of formal guidelines and an international code of conduct under the Hiroshima AI Process.

Such guidelines will provide clarity for the Australian government, which has driven responsible AI policy development through the CSIRO and previously contemplated banning ‘high-risk’ AI as it wrestles with the balance between government oversight and business mandates.