The European Union has opened the door for western regulation of artificial intelligence after its parliament adopted a draft of the Artificial Intelligence (AI) Act last week.
Europe’s risk-based approach to AI is expected to set the tone for international regulatory approaches, including in Australia.
Under the draft laws, which will now enter a negotiating phase with EU member states, certain AI uses will be prohibited in the EU.
For the most part, these prohibitions focus on the use of AI for biometric identification in public spaces, such as facial recognition, while also outlawing controversial predictive policing practices and emotion-recognition AI.
Brando Benifei, the Italian co-rapporteur on the European Parliament’s AI committee, said “all eyes are on [Europe] today” following the AI Act’s movement into the next phase of the European Union’s legislative process.
“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Benifei said.
“We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with [the Council of the European Union].”
The EU’s draft law would also impose a strong set of requirements on AI uses it deems ‘high risk’. These include the management of critical infrastructure, education and training, employment services, migration, law enforcement, and the recommender systems behind social media feeds.
All AI products deemed ‘high risk’ will need to demonstrate robust risk mitigation systems, accurately document their capabilities and limitations, and be resilient to errors, among other requirements.
Tech giants already breaking the rules
Researchers from Stanford University’s Center for Research on Foundation Models have examined the extent to which existing general-purpose AI systems – like OpenAI’s GPT-4 and Stability AI’s Stable Diffusion v2 – would comply with the EU’s AI Act if it were enforced today.
Foundation models are large AI systems trained on massive amounts of data and adapted to a range of applications.
They extracted 12 requirements from the AI Act and found that, for the most part, foundation models were not compliant with the new EU rules.
Importantly, the Stanford team also noted that it “is currently feasible for foundation model providers to comply with the AI Act” and that the proposed regulation “would improve transparency in the entire ecosystem”.
The identified AI Act requirements fall into four categories: compute, model, deployment, and data.
The compute category is about disclosing the amount of computing power used to train a model, along with measurements of the energy consumed in the process.
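The Act does not spell out a calculation method, but as a rough illustration of what an energy disclosure could involve, the sketch below estimates training energy from GPU count, average power draw, training hours, and data-centre overhead. The helper function and the figures in it are hypothetical assumptions for illustration, not drawn from the Act or the Stanford study.

```python
def estimate_training_energy_kwh(num_gpus: int,
                                 avg_gpu_power_watts: float,
                                 training_hours: float,
                                 pue: float = 1.2) -> float:
    """Rough training-energy estimate: GPUs x average draw x hours,
    scaled by the data centre's power usage effectiveness (PUE).
    Hypothetical sketch only, not a method mandated by the AI Act."""
    gpu_energy_kwh = num_gpus * avg_gpu_power_watts * training_hours / 1000
    return gpu_energy_kwh * pue

# Hypothetical run: 1,000 GPUs drawing ~300 W each for 30 days.
print(f"{estimate_training_energy_kwh(1000, 300, 30 * 24):,.0f} kWh")
```

In practice, a provider would be expected to report measured figures from its own infrastructure rather than back-of-the-envelope estimates like this.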
The model category looks at benchmarking and risk mitigation requirements.
Deployment seeks to control how most people will experience AI, mandating that machine-generated content – like images, video, and audio – is properly watermarked.
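The Act leaves the exact watermarking mechanism open. As one minimal sketch of labelling machine-generated images, the hypothetical snippet below embeds a provenance flag in a PNG’s metadata using Pillow; a robust watermark would need to survive edits and re-encoding, which plain metadata does not.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str, model_name: str) -> None:
    """Write the image with text chunks declaring it machine-generated.
    This is simple provenance metadata, not a tamper-proof watermark."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", model_name)
    image.save(path, pnginfo=meta)

# Hypothetical usage with a placeholder image and model name.
save_with_ai_label(Image.new("RGB", (512, 512)), "output.png", "example-model")

# Reading the label back:
print(Image.open("output.png").text)  # {'ai_generated': 'true', 'generator': 'example-model'}
```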
The data category covers the source of a model’s training data, how it is governed – including steps taken to mitigate bias – and summaries of copyrighted data included in training sets.
This last point is particularly pertinent given the lawsuits copyright holders have launched against the likes of Microsoft, OpenAI, and Stability AI over generative AI products trained on unattributed code, paintings, and photographs scraped from the internet.
Copyright, as well as being a very live legal issue, is an area where the researchers found companies consistently falling short, with only two of the 10 models tested scoring above zero.
“Few providers disclose any information about the copyright status of training data,” they said.
“Many foundation models are trained on data that is curated from the Internet, of which a sizable fraction is likely copyrighted.”
Reporting on energy consumption and disclosing risk mitigation measures are two other areas in which the Stanford team found AI companies currently struggling.
The Stanford researchers note that global policymakers should prioritise transparency.
“Our work shows transparency is uneven at present, and an area where the EU AI Act will bring clear change that policy elsewhere should match,” they said.
“The history of social media regulation provides clear lessons for policymakers – failing to ensure sufficient platform transparency led to many of the harms of social media; we should not reproduce these failures for the next transformational technology in foundation models.”