The Australian Federal Police has confirmed that seven of its officers began trialling the controversial facial recognition tool Clearview AI without departmental oversight.

Responding to questions taken on notice from a parliamentary inquiry, the nation’s top police department admitted that its officers had begun testing the software but said it was not used as an enterprise product.

“Between 2 November 2019 and 22 January 2020, members of the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) registered for a free trial of the Clearview AI facial recognition tool and conducted a limited pilot of the system in order to ascertain its suitability,” the AFP said.

“The trial was to assess the capability of the Clearview AI system in the context of countering child exploitation.”

During that trial, seven AFP officers accessed Clearview and searched for “known individuals, and unknown individuals” relating to child exploitation cases.

International law enforcement agencies apparently tipped off the ACCCE about the “potential value of the Clearview AI product in identifying victims or perpetrators of child abuse”, which sparked the trial.

Indeed, the New York Times investigation that brought Clearview AI’s shadowy workings into the public eye demonstrated the near-miraculous way the tool identified people.

One US law enforcement officer told the Times they tracked down a man who appeared in a child exploitation video because the software found his face reflected in the mirror of another person’s gym selfie.

Accountability concerns

Despite Clearview’s apparent usefulness, questions over the efficacy of its system, along with broader security and privacy issues, raise serious concerns about whether the tool is robust enough to be used by Australian law enforcement.

The AFP revealed that the Office of the Australian Information Commissioner had investigated the department’s use of Clearview, issuing a “notice to produce” documents related to its use of the tool.

And in a joint statement, Labor shadow ministers Kristina Keneally, Mark Dreyfus, and Tim Watts called Clearview AI “deeply problematic”.

“The use by AFP officers of private services to conduct official AFP investigations in the absence of any formal agreement or assessment as to the system’s integrity or security is concerning, especially given the recent hacking of Clearview,” the statement said.

In late February, Clearview AI’s client list was pulled off its systems by hackers and then shared with journalists at BuzzFeed.

The list supposedly included email addresses from police departments around the country but, when questioned, departments either denied using the service or refused to comment on the “operational methodology” of their facial recognition systems.

On the same day as these revelations, AFP Commissioner Reece Kershaw was fronting a parliamentary committee reviewing the mandatory metadata retention regime.

What ought to have been a straightforward reprise of the AFP’s inquiry submission took a difficult turn for Kershaw when Shadow Attorney-General Mark Dreyfus asked point blank whether the AFP was using Clearview AI.

“Does the AFP use this technology?” Dreyfus asked.

“I have asked that question today myself, off the back of media reporting,” Kershaw replied.

Evidently, until he read about it on BuzzFeed, the AFP Commissioner had no idea whether the country’s top police officers were inputting pictures of Australian faces into an unsanctioned facial-matching service that draws on a database of images scraped from the internet without users’ consent.

Unscientific

Clearview AI supposedly had its system independently tested for accuracy in October last year, but the test has been criticised for lacking scientific rigour.

Swinburne University lecturer Jake Goldenfein said the test “does not represent the type of testing that an entity seeking to produce an accountable system would undertake”, as it did not involve state standards bodies or university researchers.

“Instead,” Goldenfein wrote, “Clearview AI gave the task to a trio composed of a retired judge turned private attorney, an urban policy analyst who wrote some open source software in the 1990s, and a former computer science professor who is now a Silicon Valley entrepreneur.”

Goldenfein also said Australian police use of Clearview AI was concerning, given the apparent lack of oversight of the company.

“Beyond the ethical arguments around facial recognition, Clearview AI reveals Australian law enforcement agencies have such limited technical and organisational accountability that we should be questioning their competency even to evaluate, let alone use, this kind of technology.”