The Group of Seven (G7) industrial nations have agreed to a voluntary code of conduct for artificial intelligence development that warns of the risk of AI models self-replicating or being used to create biological weapons.
Early on Tuesday morning, the G7 published a joint statement agreeing to both a set of guiding principles and the code of conduct, in an effort to codify regulatory efforts across the international forum’s member countries.
Both documents talk in broad, conceptual terms about a need for organisations to “identify, evaluate, and mitigate risk” when it comes to AI, but they include specific examples pointing to the potentially devastating, science fiction-like doomsday scenarios of technological advancement.
The G7 – comprising leaders from Canada, France, Germany, Italy, Japan, the UK, US and EU – calls on organisations to “take appropriate measures” when developing and deploying AI “to identify, evaluate, and mitigate risks across the AI lifecycle”.
Risks include the possibility of harmful bias and discrimination, threats to democracy (such as through the spread of disinformation), and offensive cyber capabilities – all fairly standard fare for AI safety talk.
But the code of conduct also wants organisations to be aware of the risk of AI models “making copies of themselves or ‘self-replicating’ or training other models”.
The code further warns that AI poses “chemical, biological, radiological, and nuclear risks” by potentially “[lowering] barriers to entry, including for non-state actors, for weapons development”.
It’s a tacit endorsement from international leaders of a controversial position within the technology community: that AI may pose an existential risk to humanity comparable to nuclear war or the climate crisis.
Organisations have already sprung up to test how ‘aligned’ advanced AI systems are with human values.
The US-based Alignment Research Center is one of the best-known of these organisations thanks to its work with Anthropic and OpenAI.
When OpenAI released GPT-4 earlier this year, its technical documentation included sections on how researchers tested the language model for its ability to “create and act on long-term plans, to accrue power and resources, and to exhibit behaviour that is increasingly ‘agentic’”.
Tests included seeing whether GPT-4 could, in a sense, propagate itself by setting up an open-source language model on a new server, and whether the model might be useful in finding and buying “alternatives to dangerous chemical compounds”.
Each jurisdiction within the G7 will have its own ways of interpreting and enforcing AI standards, as the guiding principles recognise.
In the US, President Joe Biden announced an executive order on AI safety that includes mandating standards and requiring companies that develop advanced systems to prove they have adequately tested them prior to deployment.
The EU has tried to go a step further with its AI Act, which attempts to restrict or ban certain forms of AI – like emotion recognition and predictive policing – within the economic bloc.
But European lawmakers are struggling to reach a compromise on how the law will treat foundation models, possibly pushing its passage into next year.
The G7 said it is trying to balance the risks of AI systems with the benefits of advanced technology.
Organisations that sign up to its code of conduct agree to use AI “to address the world’s greatest challenges” like health, education, and the climate.
“We believe that our joint efforts through the Hiroshima AI Process will foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximise the benefits of the technology while mitigating its risks,” the G7 said in its joint statement.