Social media giant Meta said it will begin labelling AI-generated images from other companies across its platforms, in a bid to help users make sense of the flood of AI content.

In a blog post this week, Meta’s president of global affairs Nick Clegg outlined the company’s plans to integrate AI-related metadata standards into Facebook, Instagram, and Threads.

While Meta has already been labelling content created by its own AI applications, its social platforms have lacked a way of telling users when images made with tools from other companies, such as OpenAI, Google, Microsoft, and Midjourney, appear in their feeds.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Clegg said.

“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.

“So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.”

The automatic identification tool comprises two technical standards: the IPTC Photo Metadata Standard and AI-specific aspects of the C2PA standard for authenticating content provenance.
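In practice, both standards embed a machine-readable marker in the image file’s metadata that platforms can check on upload. As a rough sketch of the idea, not Meta’s actual pipeline, the Python below scans a file’s bytes for the IPTC digital source type value that participating generators embed to flag algorithmically created media; the file name is hypothetical, and a production system would parse the XMP and C2PA structures properly.

```python
# Rough sketch of metadata-based AI detection, not Meta's implementation.
# The IPTC Photo Metadata Standard flags AI imagery with the digital
# source type "trainedAlgorithmicMedia", carried in the file's XMP packet.

from pathlib import Path

# IPTC NewsCodes term for media created by a generative model
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Naive check: scan the raw bytes for the IPTC AI-source marker.
    A real implementation would parse the XMP/C2PA metadata properly."""
    data = Path(image_path).read_bytes()
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical file name
```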

For now, Meta said the ability to label third-party AI content only applies to images as the industry is still working on similar watermarking tools for video and audio.

“This approach represents the cutting edge of what’s technically possible right now,” Clegg said.

“But it’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers.”

Because the metadata standards are opt-in, Meta’s labelling efforts won’t necessarily pick up images created by bespoke models that are designed to avoid detection.
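Those invisible markers live alongside the pixels in the file’s metadata, which is what makes them easy to lose. As a minimal illustration, assuming the Pillow imaging library and a hypothetical labelled.jpg, simply re-encoding an image writes a fresh file without the metadata segments that carry the IPTC and C2PA tags, unless the caller deliberately copies them across.

```python
# Minimal sketch of why metadata markers are fragile, using Pillow.
# Re-saving a JPEG produces a new file without the original EXIF/XMP
# segments, silently discarding any IPTC/C2PA provenance tags.

from PIL import Image

with Image.open("labelled.jpg") as im:   # hypothetical AI-labelled input
    im.save("stripped.jpg", quality=95)  # no exif/xmp passed, so none kept
```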

As such, this kind of system, which will likely find its way across the broader internet, will be ineffective against AI images created by bad actors with the resources to train or buy dedicated non-watermarked image generators: nation states wanting to spread disinformation, or criminals and scammers.

Meta’s tagging system may, however, make it harder for troll campaigns like the one that reportedly emerged from 4chan and encouraged people to bypass the safeguards of Microsoft’s image generator to create explicit images of pop star Taylor Swift.

Clegg said that Meta was working on ways of including watermarks in open source models.

He also said the company is developing classifiers that would try to “automatically detect AI-generated content, even if the content lacks invisible markers”.

“These are early days for the spread of AI-generated content,” he said.

“As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.

“Industry and regulators may move towards ways of authenticating content that hasn’t been created using AI as well as content that has.”

Being proactive on AI image labelling will help keep Meta ahead of regulators, who were spurred into action last year by the ChatGPT-driven rise of generative AI.

A common thread among government efforts to stop cyberspace being flooded with disinformation was the expectation that companies include watermarks on content spat out by AI systems.

The EU demanded this with its landmark AI Act, as did US President Joe Biden in an executive order.

The Australian government has taken a slower approach to AI regulation but said in a recent interim response to a consultation paper that it will work with industry “to develop options for voluntary labelling and watermarking of AI-generated materials”.