Australia’s information and privacy regulator has decided not to pursue further legal action against controversial facial recognition company Clearview AI, which it previously found had scraped images of Australians from the internet without consent.

Clearview AI’s software has been used by law enforcement agencies and military bodies around the world, including in Australia.

The Office of the Australian Information Commissioner (OAIC) said on Wednesday that the resources further legal action would require, together with Clearview AI’s other legal troubles, were among the reasons for its decision not to pursue the company.

Privacy Commissioner Carly Kind said she had “given extensive consideration to the question of whether the OAIC should invest further resources in scrutinising the actions of Clearview AI”.

She said the OAIC had already investigated the company, and that the firm had also found itself at the centre of other inquiries around the world, including a class action lawsuit in the United States.

"Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time,” Kind said.

The OAIC said its 2021 determination that Clearview AI breached Australia’s Privacy Act by collecting facial images and biometrics of Australians still stood.

It said this included an order for those images to be deleted, and for the company to stop collecting images of Australians.

The regulator said recent media reports alleging that Clearview was continuing to collect images of Australians had raised questions over whether the company had complied with the regulator’s 2021 order.

Clearview AI, which was co-founded by Australian entrepreneur Hoan Ton-That, did not respond to a request for comment, and has not demonstrated that it deleted images of Australians or stopped collecting them.

Australian Hoan Ton-That (left) co-founded Clearview AI in 2017. Photo: Clearview AI / Supplied

Greens call for ‘further investigation’

Greens senator David Shoebridge said the suspicion that Clearview AI had not complied with the government’s order “absolutely warrants further investigation”.

"If AI is being used by Clearview to potentially supercharge the harm being done here, then the public deserves to know as soon as possible,” he said in a statement.

Clearview AI’s website states that the company possesses “the largest known database of 50+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources”.

In Australia, the firm’s services have been trialled by federal and state police forces, in some cases without departmental oversight.

The 2021 determination found the company had “indiscriminately collected images of individuals’ faces from publicly available sources across the internet (including social media)”, the OAIC said.

Clearview AI challenged the determination in the Administrative Appeals Tribunal in the same year, but withdrew the proceedings in August 2023 before the tribunal could make a ruling.

The company had previously told Australian authorities it believed it was not subject to local regulations because “no business or activity is conducted within an Australian jurisdiction”.

Clearview AI was banned from selling its facial recognition services to private entities across the US in 2022, and settled its class action lawsuit in June 2024.

Scraping ‘increasingly common’ in age of generative AI

Privacy Commissioner Carly Kind said Clearview AI’s practices were “troubling”, and that similar scraping of data without consent was becoming “increasingly common due to the drive towards the development of generative artificial intelligence models”.

A recent study of Australians’ attitudes towards facial recognition technology also found some were concerned about tech companies using such data to train their generative AI systems.

Such models, which can generate text, audio, images and video, require large amounts of training data, and many AI companies have scraped text, images and videos from the internet to obtain it.

Meta used Australians’ public Facebook and Instagram posts to help train its AI, while X’s Grok-2 image generator appears to have been trained on copyrighted material.

The OAIC said it would soon issue new “guidance for entities seeking to develop and train generative AI models”, including AI chatbots.