Australian inventors may have given the world the pacemaker, black-box flight recorder, Wi-Fi, Google Maps and the bionic ear – but the global uproar over another recent Australian invention suggests it may be time to add intrusive mass surveillance and the death of privacy to the list.

An Australian of Vietnamese descent, Hoan Ton-That found himself in the midst of a massive online firestorm after a New York Times exposé revealed the increasing use of Clearview AI, a face-recognition application that can identify a person from just one image.


It works by using face-recognition algorithms to compare the photo with more than 3 billion photos that the company claims were scraped from Facebook, YouTube, and a myriad of other websites using automated bots.
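Clearview has not published its algorithm, but face-recognition search of this kind typically converts each face into a numeric "embedding" vector and returns the stored photos whose vectors sit closest to the query's. A minimal sketch of that matching step, using made-up four-dimensional vectors as stand-ins for what a real face-recognition model would produce:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def best_matches(query, gallery, top_k=3):
    """Rank stored face embeddings by similarity to the query embedding."""
    scored = [(name, cosine_similarity(query, vec)) for name, vec in gallery.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy embeddings; a real model would output hundreds of dimensions per face.
gallery = {
    "photo_a.jpg": [0.90, 0.10, 0.00, 0.20],
    "photo_b.jpg": [0.10, 0.80, 0.30, 0.00],
    "photo_c.jpg": [0.85, 0.15, 0.05, 0.25],
}
query = [0.88, 0.12, 0.02, 0.22]

for name, score in best_matches(query, gallery):
    print(name, round(score, 3))
```

The point of the sketch is scale: once billions of scraped photos have been reduced to vectors like these, matching any new photo against all of them becomes a cheap ranking operation.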

The New York Times’ tests of the software not only identified the journalist who provided the sample photo, but also turned up many photos of her that she had never seen before.

The software – which is marketed as “technology to help solve the hardest crimes” – has been labelled “creepy” but has found an enthusiastic audience among American police departments that are paying $14,600 ($US10,000) and up for the ability to upload a photograph of a perpetrator to quickly get a list of possible suspects.

Departments have used it to identify perpetrators of crimes from bystander videos, surveillance cameras and cold-case files, and the company claims more than 600 jurisdictions among its customers.

Backed by investor Peter Thiel – known for his early and successful backing of Facebook and other firms – Clearview AI, which Ton-That started as a passion project after several failed apps, threatens to turn every publicly available online photo into a trap that almost nobody can avoid.

Ton-That reportedly even tested a way of linking the system with augmented-reality goggles, which would allow the system to identify a person – and, thanks to cross-matching with public databases, their address, friends, hobbies, job, and more – just by looking at them.


How private is private?

Yet as word of the company’s activities spread this month in the wake of the New York Times’ story, alarm grew amongst those who have long been warning about society’s slide towards a ubiquitous surveillance state.

The UK’s massive network of surveillance cameras is well known, but China – by adding face recognition to its own surveillance networks and more recently developing high-resolution cameras capable of identifying individuals in a crowd – has long been cited as the worst-case scenario.

Australia, however, has joined other countries in creeping towards “dangerous” and increasingly ubiquitous surveillance – with face-recognition systems used at the Gold Coast Commonwealth Games, facial-recognition cameras deployed at Victorian police stations, and recent moves such as a national driver-licence face-matching database promising instant identification of nearly all adult Australians.

The Morrison government has pushed hard for the Identity-Matching Services Bill 2019, floating the idea of mandatory facial scans for accessing pornography and online gaming sites, but its authors were sent back to the drawing board after a review late last year found it offered too few protections against misuse.

The government says the system is “not intended for mass surveillance” but Electronic Frontiers Australia last year warned that Australian universities, researchers and technology companies “are actively collaborating with oppressive regimes to use technologies like facial recognition against vulnerable groups”.

“We see the same language being used in Australia as used by the Chinese government,” EFA chair Lyndsey Jackson warned at the time.

“It’s justified as being about terrorism, but that’s just a word used to stop people thinking about what’s actually going on… these systems are being imposed on us, in secret.”

Critics of broad face-matching technologies have been loudly and actively protesting the developing technology, with EFA board member Justin Warren arguing in a recent Sydney Criminal Lawyers thought piece that “it’s hard to see how an always-on real time facial recognition isn’t arbitrary interference with our privacy” – putting it in contravention of the anti-interference provisions in Article 12 of the Universal Declaration of Human Rights.

A step too far?

Despite the uproar, use of face-recognition technology is increasing rather than decreasing – fuelled in part by the widespread application of machine-vision and face-recognition technology from cloud providers like Microsoft, Google and Amazon.

Google, sensitive to its challenges with the Chinese government and anticipating a firestorm if it offered easily accessible mass-surveillance tools, has walked a fine line around facial recognition – which, its policy position says, must be “developed and used responsibly”.

Whether or not Clearview AI’s application can be termed ‘responsible’ – the company claims full legal compliance and argues that it “is not a surveillance system and is not built like one” – its widespread availability and early successes have truly let the genie out of the bottle.

Twitter has formally asked Clearview AI to stop using its photos and there’s little doubt that the app’s scraping of billions of photos contravenes the terms of service of Facebook – which, like all social-media sites, prohibits the use of data scrapers for wholesale downloading of its data without “express written permission”.

The Cambridge Analytica scandal led that company to clamp down on external access to its data in 2018, and users likely face serious repercussions for violating the policies – yet open-source and commercial social-media scrapers like Octoparse continue to facilitate mass downloads of what that company calls “the largest and most dynamic dataset about human behaviour”.

The lure of that data set for marketers is undeniable. Yet as the technology continues to improve and tools like Clearview AI come into more common usage, their potential for harm will demand ever more scrutiny – before privacy becomes nothing more than an antiquated concept.