Tech giants Microsoft and Amazon will not offer facial recognition technology to police departments until better regulation is developed around how emerging technologies affect civil rights.

The announcements came in quick succession this week, shortly after IBM’s move to stop its own facial recognition offerings entirely.

IBM CEO Arvind Krishna published an open letter to the US Congress on Monday condemning the use of AI and facial recognition for surveillance.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms,” Krishna said.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Two days later, Amazon published a blog post of its own announcing a “one-year moratorium” on the use of its unfortunately named ‘Rekognition’ technology.

“We will continue to allow organisations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families,” Amazon said in the post.

“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge.”

Amazon had been vocal in its push for law enforcement to use Rekognition and other AWS services.

In July last year, Amazon touted the virtues of Amazon Rekognition as a tool for tracking down missing persons and victims of sex trafficking.

At the bottom of that post – below a bolded call for the reader to “learn more” about the “beneficial uses of Amazon Rekognition” – is a link to another blog post from October 2018 in which Amazon offers numerous use-cases for AWS in a law enforcement context.

Finally, Microsoft released a statement on Friday also calling for US laws to catch up with innovation.

"We do not sell our facial recognition technology to US police departments today, and until there is a strong national law grounded in human rights, we will not sell this technology to police," the statement said.

Problematic AI

The announcements come amid weeks of turmoil and anti-police sentiment as protests rage across the US following the killing of George Floyd by then-members of the Minneapolis Police Department.

Protesters in France, the UK, and Australia have taken to the streets in solidarity with US marches that are expressing frustration and rage at inequality baked into the political economy.

As the bubbling discontent reached a tipping point, Silicon Valley technology companies finally responded to the criticism that AI systems inherit systemic bias from the data that powers them.

Facial recognition systems have a poor track record of correctly identifying darker-skinned people. An AI used to make criminal sentencing more ‘objective’ in the US was found to be biased against African Americans.

Amazon even had to stop using a recruitment AI after it was discovered to be biased against women.

And the Chinese government has used facial recognition technology and AI tools to monitor and control the Uyghur Muslim population in the western province of Xinjiang.

In December, the Australian Human Rights Commission called for safeguards to be created in order to make sure AI is developed carefully and responsibly.

“Emerging technologies can bring great societal benefits, but people are starting to realise their personal information can also be used against them,” Human Rights Commissioner, Ed Santow, said.

“In the last year we’ve seen troubling examples of emerging technology being ‘beta tested’ on vulnerable members of our community, and we’ve seen AI used to make high-stakes decisions that have had serious human rights impacts on individuals both in Australia and overseas.”

It could already be too late

Despite displays of goodwill from Silicon Valley, the cat may already be out of the bag when it comes to the use and misuse of AI and facial recognition.

Clandestine facial recognition company Clearview AI leveraged the data collection regimes of big tech to build a tool that is now being used by police departments in the US and has been trialled by law enforcement around the world – including in Australia.

Australian CEO of Clearview AI, Hoan Ton-That, has said his company’s facial recognition tool is “not intended to be used as a surveillance tool relating to protests” and that Clearview AI can actually be beneficial for stopping wrongful arrests.

“As a person of mixed race, this is especially important to me,” Ton-That said in a recent statement.

"We are very encouraged that our technology has proven accurate in the field and has helped prevent the wrongful identification of people of colour."

Even in the EU, which has some of the world’s toughest data protection rules, the rules governing facial recognition technology like Clearview AI are still a work in progress.

On Wednesday, a taskforce from the European Data Protection Board (EDPB) said the lawfulness of police departments using Clearview AI “cannot be ascertained”.

“The EDPB is therefore of the opinion that the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime,” the taskforce found.

“The EDPB refers to its guidelines on the processing of personal data through video devices and announces upcoming work on the use of facial recognition technology by law enforcement authorities.”