The European Union has agreed on its landmark Artificial Intelligence Act that will outright ban the use of AI for dystopian social credit systems and biometric surveillance, while requiring big tech companies to provide greater transparency around how their models are trained.

Europe’s risk-based approach will be applied across all member states.

Under the act, AI use will be categorised as carrying either minimal, high, or unacceptable risk.

President of the European Commission, Ursula von der Leyen, said the agreement was “a historic moment” for Europe.

“The AI Act transposes European values to a new era,” she said.

“By focusing regulation on identifiable risks, today's agreement will foster responsible innovation in Europe.

“By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU.”

According to the European Commission, “the vast majority of AI systems” – things like recommendation algorithms and spam filters – will be classed as minimal risk and won’t be subject to mandatory obligations.

For high-risk areas like critical infrastructure, medical devices, biometric identification, and emotion recognition, AI use will require a set of tight governance controls to mitigate risk and ensure adequate oversight of the systems.

Certain uses of AI will be outright banned in the EU, including the use of social credit systems and predictive policing models.

Many uses of biometric surveillance like live facial recognition in public spaces will be prohibited with “narrow exceptions” for law enforcement.

Companies adopting chatbots for customer service will be required to tell people that they are interacting with a robot.

Likewise, all AI-generated content will need to be labelled.

Breaking the rules will result in fines of up to seven per cent of a company’s annual turnover.

The European Parliament first adopted a draft of its AI laws in June, at a time when governments around the world were considering how they would respond to the swift adoption of AI tools following the rise of ChatGPT.

Scrutiny for general-purpose AI

But the world-first approach to AI still required some deft political manoeuvring to get over the line.

As recently as two weeks ago, the act had serious doubts hanging over it as negotiations between representatives of countries within the major economic bloc stalled.

According to Reuters, the issue hinged on strict controls over powerful foundation models that can perform a range of tasks, like ChatGPT and Google Gemini.

France, Germany, and Italy reportedly wanted foundation models treated with a self-regulatory approach to preserve the ability of local players to compete with US tech giants.

Foundation models and general-purpose AI systems will be closely scrutinised for how their developers mitigate risk and evaluate efficacy with a view to “[ensuring] transparency along the value chain”.

The original method of classifying these higher-risk general-purpose systems was based on how many enterprise customers they had, but that appears to have been adjusted.

Instead, a general-purpose system will be classified by whether the compute used to train it exceeds a certain threshold, measured in floating point operations (FLOPs), according to TechCrunch.
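To give a sense of scale, the sketch below estimates a model’s total training compute and compares it against a threshold. It is a rough illustration only: the 10^25 FLOPs figure is the threshold widely reported for the act’s systemic-risk tier (not stated in this article), and the 6 × parameters × tokens formula is a common community approximation for transformer training compute, not anything defined in the legislation. The model sizes are hypothetical.

```python
# Rough estimate of total training compute using the common
# FLOPs ~= 6 * parameters * training-tokens approximation.
def training_flops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

# Widely reported systemic-risk compute threshold (assumption,
# not taken from this article).
THRESHOLD_FLOPS = 1e25

# Hypothetical model: 70 billion parameters, 2 trillion training tokens.
flops = training_flops(70e9, 2e12)

print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Over threshold?", flops >= THRESHOLD_FLOPS)
```

Under this approximation, a 70-billion-parameter model trained on 2 trillion tokens lands around 8.4 × 10^23 FLOPs, comfortably below a 10^25 cut-off; only substantially larger training runs would cross it.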

Europe’s AI Act still needs to go through its final rubber stamp stage before it is law.

Companies will then have two years to comply with the act, except for the prohibitions, which will be enforced six months after the act passes.

No doubt, the AI Act will have broader ramifications for AI adoption around the world as it will define the behaviour of companies that want to reach European markets.

Australia, meanwhile, is still considering its legislative approach to AI – an approach experts have criticised as too slow to keep up with the technology’s rapid pace of development.

While the government has begun consultation on responsible AI use and signed the Bletchley Declaration on AI Safety, it has largely spent the last year taking steps to improve lagging rates of AI adoption.

Last week, Industry and Science Minister Ed Husic announced $17 million of funding for businesses and organisations to create five AI Adopt Centres to help guide small and medium enterprises on how to adopt AI.