This Information Age article forms part of a 7-part series on Ethics, covering artificial influencers, facial recognition, IoT, security and more. The series will culminate in an online panel on 11 December. Register to take part in the discussion and send your questions to the ACS Ethics Committee.

Facial recognition technology has been advancing rapidly over the past decade.

If you’ve ever seen a suggestion on Facebook or another social media platform to tag a face with a suggested name, you’ve seen facial recognition at work.

A wide variety of tech companies have used this technology over the past several years, turning the time-consuming work of cataloguing photos into something both instantaneous and useful.

So, what is changing now?

In part, it’s that computer vision has become better and faster at recognising people’s faces.

It reflects better cameras, sensors and machine learning capabilities.

It also reflects the growth of ever-larger datasets as more images of people are stored online.

This improvement also reflects the ability to use the cloud to connect all this data and facial recognition technology with live cameras that capture images of people’s faces and seek to identify them – in more places and in real time.
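Under the hood, recognition systems of this kind typically reduce each face image to a numeric “embedding” vector and then compare vectors: two images of the same person should produce similar vectors. As a rough sketch of that comparison step only (the numbers and the threshold below are made up for illustration; real systems use learned embeddings with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_person(embedding_a, embedding_b, threshold=0.9):
    """Declare a match when the embeddings are sufficiently similar.

    The threshold is a hypothetical value chosen for this toy example;
    tuning it trades off false matches against missed matches.
    """
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Toy 4-dimensional "embeddings" (invented values, not from any real model).
stored = [0.1, 0.9, 0.3, 0.5]        # embedding on file for a known person
probe_same = [0.12, 0.88, 0.31, 0.49]  # new photo of the same person
probe_other = [0.9, 0.1, 0.8, 0.2]     # photo of someone else

print(is_same_person(stored, probe_same))   # similar vectors -> match
print(is_same_person(stored, probe_other))  # dissimilar vectors -> no match
```

The accuracy questions raised later in this article come down to exactly this kind of threshold: set it too loosely and innocent people are misidentified; set it too strictly and genuine matches are missed.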

Advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives.

This means the potential uses of facial recognition are myriad.

At an elementary level, you might use it to catalogue and search your photos, but that’s just the beginning.

Some uses are already improving security for computer users – for example, unlocking many laptops and iPhones with your face instead of a password – and, in the future, perhaps devices such as automated teller machines.

Some emerging uses are both positive and potentially even profound.

Imagine finding a young missing child by recognising her as she is being walked down the street.

Imagine helping the police to identify a terrorist bent on destruction as he walks into the arena where you’re attending a sporting event.

Imagine a smartphone camera and app that tells a person who is blind the name of the individual who has just walked into a room to join a meeting.

But other potential applications are more sobering.

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge.

Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech.

Imagine the stores in a shopping mall using facial recognition to share information with each other about every shelf you browse and product you buy, without asking you first.

Possible issues for consideration in these scenarios:

Should law enforcement use of facial recognition be subject to human oversight and controls, including restrictions on the use of unaided facial recognition technology as evidence of an individual’s guilt or innocence of a crime?

Similarly, should we ensure there is civilian oversight and accountability for the use of facial recognition as part of governmental national security technology practices?

What types of legal measures can prevent use of facial recognition for racial profiling and other violations of rights while still permitting the beneficial uses of the technology?

Should use of facial recognition by public authorities or others be subject to minimum performance levels on accuracy?

Should the law require that retailers post visible notice of their use of facial recognition technology in public spaces?

Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition? If so, in what situations and places should this apply? And what is the appropriate way to ask for and obtain such consent?

Should we ensure that individuals have the right to know what photos have been collected and stored that have been identified with their names and faces?

Should we create processes that afford legal rights to individuals who believe they have been misidentified by a facial recognition system?

Michelle Sandford is the ACS Western Australia Branch Chair.

Register to take part in our Ethics online discussion on 11 December.

Read our entire 2018 Ethics series:

Part 1: Artificial influencers
Part 2: Facial recognition unmasked
Part 3: When IoT goes wrong
Part 4: Who’s to blame for phishing breaches?
Part 5: Could encryption legislation increase risk of being hacked?
Part 6: Would you install a keylogger at your workplace?
Part 7: Do you abide by a professional code of ethics?