Australia has joined 28 other countries and governments in signing the Bletchley Declaration on AI Safety in the UK, as governments around the world continue to flounder while a powerful new technology takes hold.
Science and Technology Minister Ed Husic attended last week's UK AI Safety Summit at historic Bletchley Park, where Alan Turing and his team cracked Nazi codes, and joined an international cohort in agreeing on a broad approach to the question of how to tackle AI.
The declaration itself speaks of the potential benefits and harms of AI.
Its signatories recognise that the technology is “already deployed across many domains of daily life” and encourage a “pro-innovation and proportionate governance approach” to regulation.
Among the signatories were China, the European Union, and the US, which have all taken steps to curb the potential harms of generative AI and foundation models in ways that reflect their national interests.
China drafted requirements that AI services may not be used to generate content that subverts state power or incites separatism; the EU has its risk-based approach that seeks to outright ban certain uses of AI and impose strict requirements on uses deemed ‘high risk’; and US President Joe Biden just signed an Executive Order demanding companies share their safety testing with the government when creating potentially dangerous AI models.
The UK, like Australia, is still consulting on what, if anything, it plans to do to mitigate the potential for AI to flood our feeds with disinformation, make harmful decisions against vulnerable people, and discriminate at scale.
What the Bletchley Declaration does is reaffirm, as the Group of Seven did a week earlier, a broad set of statements around “identifying AI safety risks” and “building respective risk-based policies” within each jurisdiction.
Australia has had a national set of AI Ethics Principles since 2019.
Speaking with the ABC while in London, Husic declined to give a timeline for how his government would take AI regulation further, nor did he clarify whether the government intends to adopt an EU-style risk-based system or one focused on much-needed modernisation of existing legislation.
“We’ve had broad agreement that voluntary commitments won’t be enough,” Husic said in reference to the latest consultation period his department began back in June.
“We do need to get the balance right on this. At its best, AI can help improve the quality of life and improve the way our economy functions. At its worst, it can distort and divide.”
Husic also told a press conference that he was “not worried about robots taking over” so much as he was “worried about the prospect that AI-generated disinformation might”.
Husic’s main concern with the rise of generative AI appears to be mis- and disinformation, an issue his government is already tackling to some extent with its proposal to give the Australian Communications and Media Authority (ACMA) more powers to encourage tech giants to limit the spread of deliberately misleading information on social media.
Companies are likewise aware of the need to make the images and videos their products create identifiable, and have started implementing watermarks to help spot AI fakes, though the efficacy of these watermarks is very much in doubt.
On the sidelines of the AI Summit, Husic also signed a statement of cooperation with the UK on developing quantum technologies.
The statement commits the countries to convene regular meetings about quantum technologies and to “elevate” the technology in bilateral agreements.
“Elevating this cooperation across quantum scientific research, technology, and innovation promises to deepen our bonds of friendship and understanding, strengthen our economies, and contribute to global science and technology knowledge,” the statement said.