As the EU closes in on finalising its artificial intelligence regulations, experts are warning that Silicon Valley is trying to water down rules aimed at protecting human rights.
In an open letter published this week, academics and researchers from around the world urged the European Commission to be wary of self-interested entities trying to control the conversation around AI.
“The Commission will undoubtedly receive detailed feedback from many corporations, industry groups, and think tanks representing their own and others’ interests, which in some cases involve weakening regulation and downplaying potential risks related to AI,” the letter says.
“We hope that the Commission will stand firm in doing neither.
“Moreover, as experts who have been involved for years or decades in developing the core technologies, we would like to emphasise one central point: that while it is difficult to forecast exactly how or how fast technological progress will occur, it is easy to predict that it will occur.”
This week, the European Union closed the consultation period for its AI white paper and will begin sifting through the responses as it looks to construct a framework governing the future of artificial intelligence in the EU.
In its submission to the EU, Google warned that AI regulation “must be flexible in nature” to ensure that “it can accommodate rather than discourage future innovation”.
And Microsoft urged the EU to “leave space for positive uses of AI” in order to make sure that regulatory compliance “does not prevent products and services from reaching the market”.
But Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, said Europe has already disproved the notion that the digital space can’t be regulated.
“There was a time when people thought you couldn’t and shouldn’t regulate the digital space,” he told Information Age.
“They thought it couldn’t be done because it’s not physical and not restricted to national boundaries. There was also an idea that you simply shouldn’t regulate because it would stifle innovation.
“But GDPR is a fine example of how you can regulate the space, and the Commission has said it wants to move fast on regulating AI next.”
Tech companies have tried to get on the front foot when it comes to government regulation.
In the wake of the Black Lives Matter protests, IBM, Microsoft, and Amazon recently decided they would not allow police departments to use their facial recognition software.
While he called the decisions a “significant move” from the tech giants, Professor Walsh was sceptical about the motivations and public relations considerations behind them.
“You have to be cynical,” he said. “We’ve seen them get vast wealth in these unregulated markets.
"But at least the conversation is going in the right direction. The fact we’re talking about this right now, at the same time the tech companies are realising it's toxic, is a positive sign."
Professor Walsh joins Professor Genevieve Bell, Professor Enrico Coiera, Professor Elanor Huntington, and Dr Paul Dalby as Australia’s first nominations for the Global Partnership on Artificial Intelligence (GPAI) – an international taskforce looking at the responsible development of AI.
Australia joins Canada, the EU, Germany, France, India, Italy, Japan, New Zealand, South Korea, Singapore, Slovenia, the UK, and the US as a founding member of GPAI.