Australians overwhelmingly believe police should be able to use facial recognition technology (FRT) to identify suspects from photos and CCTV footage – but a new survey has found near universal agreement that using FRT in workplaces and public spaces is a bridge too far.

Fully 80.2 per cent of respondents to the study – entitled Australian Public Attitudes to Facial Recognition Technology and based on a Monash University-Australian National University survey of 2,006 Australian adults – said emergency workers should be able to use FRT to identify the bodies of victims of disasters or war.

Significantly, three quarters also said police should use the technology to put a name to an image of a criminal suspect – something that was roundly condemned when police were found to be using Clearview AI, but is becoming normalised as Home Affairs consolidates driver’s licence images into a database for rapidly identifying suspects from images and live CCTV feeds.

The technology proved its worth earlier this year when Chinese authorities used CCTV footage to identify and track a suspicious elderly man who entered a building shortly before it was burgled – then tracked him down to learn that he was actually a 40-something man wearing a silicone mask he had bought online.

Researchers have previously noted that silicone masks “pose a serious threat” to FRT systems, but survey respondents generally trust FRT to do what it says on the box – with 58.7 per cent calling it accurate.

Yet despite their support for first responders, almost nobody wanted FRT used indiscriminately in public situations, with just 15.7 per cent accepting its use by retailers for Minority Report-esque advertisements tailored for individual shoppers.

The new face of the surveillance state

FRT has been contentious in recent years, with the likes of 7-Eleven, Kmart, Bunnings and even major sports stadiums recently flagged for their use of the technology amidst growing concern about increasingly pervasive surveillance – concern that, among other things, drove the NSW Government to pause proposed laws that would have used FRT to enforce bans and identify criminals in pokies venues.

“The more the technology was seen to directly affect respondents,” the report notes, “the less likely there was to be support for it… especially in domains such as recreation or work.”

Significantly, just 16.4 per cent of respondents feel employers should be able to use FRT to monitor employee productivity – a particularly contentious use that has challenged surveillance laws and raised hackles among employees, particularly hybrid workers who bristle at being monitored based on outdated concepts of worker productivity.

While most people understand so-called ‘one-to-one’ FRT – for example, scanning your face to unlock your phone and apps – Monash Professor and chief investigator Mark Andrejevic said the benefits and risks of ‘one-to-many’ FRT are still misunderstood by many.

“The technology is more widespread than many realise and it’s poised to spread rapidly,” he explained, noting that the ability to automatically identify individuals at a distance raises human rights, privacy, and other issues – and that most people don’t really understand FRT’s implications, with just a quarter of survey respondents saying they knew ‘a fair bit’ or ‘a lot’ about it.

“People,” he said, “need a better understanding of how, why and where facial recognition systems operate, how their personal data will be processed, used and stored, what kinds of risks they might confront from participating, and what mechanisms hold the technology accountable.”

Tech giants are the least trusted to use face recognition

Bosses may be on notice about workplace surveillance, but respondents were also concerned about tech giants’ use of FRT for everything from controlling service access to training AI engines, and mapping social networks through large-scale analysis of photos and other data.

Meta, for one, this month agreed to a $2.17 billion ($US1.4 billion) settlement with Texas Attorney General Ken Paxton over what his office called the company’s “unauthorised capture of personal biometric data”.

For more than a decade, Paxton’s office said, Meta ran facial recognition software on “virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted” in violation of Texas state law.

Widespread furore led Facebook to sunset the technology in 2021 and promise to delete more than a billion face image templates – yet the new survey found that tech companies like Google and Facebook are still the least trusted, with just 17 per cent trusting the companies to manage and protect facial recognition data.

“As respondents start to contemplate one-to-many and face analysis uses of the technology,” the study found, “their familiarity and comfort levels diminish.”

“In these use cases, respondents express more concern regarding accuracy and error rates, the capacity for FRT to be utilised for mass surveillance, and how trusted system users are to maintain facial data security/integrity.”