Welcome to the Information Age 6-part series on AI ethics, written by Australia’s foremost experts in this field. Here in Part 6, Toby Walsh applauds Google for its promising start on ethical AI principles.
It’s become fashionable to attack the big technology companies. How dare Facebook provide our data to Cambridge Analytica? What was Google thinking when it demoed the human-sounding Duplex assistant? Why can’t Amazon treat its warehouse workers better?
So, when they do the right thing, we should applaud loudly.
For this reason, I want to congratulate Google for releasing its principles for ethical artificial intelligence (AI).
Here is a technology company leading from the front.
It’s even living up to its updated motto of “Do the Right Thing”.
It’s the principles
Google set out the following principles for any AI application that it might build now or in the future:
· to be socially beneficial
· to avoid creating or reinforcing unfair bias
· to be built and tested for safety
· to be accountable to people
· to incorporate privacy design principles
· to uphold high standards of scientific excellence
· to be made available for uses that accord with these principles.
In addition, Google promised not to design or deploy AI in the following application areas:
· technologies that cause or are likely to cause overall harm
· weapons or other technologies whose principal purpose is to injure people
· surveillance technologies that violate internationally accepted norms
· technologies that contravene international law or principles of human rights.
Of course, I’m disappointed it took controversy over Project Maven to get them to decide to behave responsibly.
Project Maven was the US Department of Defense project that had Google using machine learning to analyse drone video data.
Over a dozen Google employees resigned in protest over Project Maven.
Thousands within and outside the company signed petitions calling on Google to withdraw from the project. (Full disclosure: I was one of these thousands.)
For a project worth less than $10 million per year, why did it take Google so long to see sense?
I hope it wasn’t the reported $250 million that the project might grow into in future years.
Further disappointments
There are three other aspects of Google’s ethical principles that disappoint me.
First, Google did not announce an independent body to monitor compliance with these ethical principles.
It’s not enough just to be ethical. You need to be seen to be so. Quis custodiet ipsos custodes? (Who will guard the guards themselves?)
Second, Google will continue to conduct experiments on the public, but won’t seek independent ethical approval for these experiments.
If I or anyone else at a university runs an experiment involving members of the public, we need to get approval from an ethics review body.
We need to get informed consent (or justify that it is reasonable not to do so).
We need to minimise the risks to the participants.
And we need to argue that the benefits outweigh any downsides.
These safeguards apply for university researchers involved in the pursuit of knowledge.
How can it be that companies involved in the pursuit of profit are held to lower standards?
Third, Google promised to avoid building only those surveillance technologies that violate internationally accepted norms.
Why couldn’t Google hold itself to a higher and less ambiguous standard, like only building surveillance technologies that are necessary and proportional?
Better still, why not simply refuse to work on all surveillance technologies?
Nevertheless, this is a fantastic start.
Google has taken the lead in the ethical use of AI.
Attention now needs to switch to the other technology giants.
For example, Amazon needs to consider what safeguards to put in place on the sale of its facial recognition technology through Amazon Web Services.
Artificial intelligence has great promise to improve our lives.
But like any technology, we need to worry about its ethical use.
Toby Walsh is Scientia Professor of Artificial Intelligence at UNSW Sydney. He is a member of the ACS AI and Ethics Technical Committee.
Read the entire AI Ethics series
Part 1: Could Cambridge Analytica happen again?
Part 2: Ethics-embedding autonomous systems
Part 3: Why Facebook and Google are b@s^a&d$
Part 4: Artificial intelligence has quietly invaded our workplaces