Controversial facial recognition company Clearview AI – which provides law enforcement with a tool that identifies criminal suspects using a database of scraped photos – had its entire list of clients stolen in a recent data breach.
The Daily Beast got hold of a notification Clearview sent to its customers warning of unauthorised access to files that included the company’s list of clients, how many accounts each client had, and how many searches the clients conducted using Clearview’s facial recognition tool.
Clearview’s attorney, Tor Ekeland, told the Daily Beast that the hacker didn’t get into its massive database of facial images.
“Security is Clearview’s top priority. Unfortunately, data breaches are part of life in the 21st century,” Ekeland said.
“Our servers were never accessed. We patched the flaw and continue to work to strengthen our security.”
US Senator Ron Wyden was highly critical of Clearview after news of the data breach broke, telling Gizmodo the incident demonstrates that Clearview can’t be trusted.
“Shrugging and saying data breaches happen is cold comfort for Americans who could have their information spilled out to hackers without their consent or knowledge,” Wyden said.
“How can we trust a company with massive privacy responsibilities when it can’t even protect its own corporate data?”
“Companies that scoop up and market vast troves of information, including facial recognition products, should be held accountable if they don’t keep that information safe.”
All eyes on Clearview
Clearview first came into the spotlight last month when a New York Times investigation shed light on the secretive company’s practices.
Whereas facial recognition systems used by law enforcement tend to draw on official images – like those from passports or driver’s licences – Clearview’s database contains billions of images scraped from all over the internet.
Clearview has come under fire since the Times exposé.
Last week, Canadian privacy regulators launched an investigation into the company, not long after Toronto police admitted they had been using the AI tool.
Even though Clearview’s Australian founder Hoan Ton-That insisted that the AI tool only uses publicly available images, Facebook, LinkedIn, and Twitter have denounced the company’s practices and told it to stop scraping their data.
For law enforcement, however, the benefits of the tool have been hard to ignore.
Suspects have been identified near-instantly using the program, and in one case a man who appeared in a child exploitation video was tracked down because the program found his face reflected in the mirror of another person’s gym selfie.
So far there is no evidence that the tool is in use by Australian law enforcement, and the Australian Federal Police has said outright that it does not use the AI.