Welcome to the Information Age 6-part series on AI ethics, written by Australia’s foremost experts in this field. Here in Part 4, Peter Leonard looks at issues surrounding the use of AI in the workplace.

Autonomous robots, and semi-autonomous agents likely to harm humans, rightly inspire fear and calls for new regulation.

A driverless vehicle is not much different to the robots Isaac Asimov envisaged governing with his laws of robotics, but with the added complexity that robot cars will sometimes face the trolley-car dilemma.

When faced with a decision where every available choice will cause harm to humans, how do you assess the magnitude of harm of each choice, so as to program the robot to take the least harmful one?

Semi-autonomous agents also raise issues of complicity and moral culpability.

It was reported that Google executives announced to company staff just last Friday that Google won’t renew its contract to work on Project Maven, the controversial Pentagon program designed to provide the military with artificial intelligence technology that helps drone operators identify objects in battlefield imagery.

In one sense, even this ethical question is easy.

Designing a weapon may be less morally culpable than operating a drone to fulfil its killing mission – the designer might expect that the drone will be deployed to take out a properly assessed and appropriate target in a ‘just war’.

But at what point should the designer of technology intended as an instrument of war conclude that the risk of morally reprehensible uses outweighs the benefits of its use on morally just missions?

And who on the design team can be expected to make these challenging assessments?

These questions are rightly attracting much attention from ethicists and lawyers.

But AI already in use creates more immediate issues.

Most AI today is used to aid humans: to automate mundane or routine tasks and decisions, to identify anomalies or unusual cases that require active human review, and then to present filtered information that aids a human decision.

On first glance, this does not look ethically or legally challenging.

A human still calls the shots, and the AI does the easy stuff and gives the hard stuff to the human.

And the decision maker in the business decides whether to trust the AI to make the call, or to refer the question for a human decision.
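
In practice, this human-in-the-loop pattern often reduces to a confidence threshold: cases the model scores confidently are handled automatically, and the rest are escalated. Below is a minimal, purely illustrative sketch of that pattern in Python; the classifier, the threshold and the numbers are invented for the example and are not drawn from any real deployment.

```python
# A minimal, purely illustrative sketch of the human-in-the-loop pattern:
# the AI decides routine cases and escalates anything it is unsure about.
# The model, threshold and data below are invented for this example.

from sklearn.linear_model import LogisticRegression

AUTO_THRESHOLD = 0.95  # below this confidence, a human reviews the case

def triage(cases, model, threshold=AUTO_THRESHOLD):
    """Split cases into those the AI decides and those escalated to a human."""
    auto_decided, needs_human = [], []
    for case, probs in zip(cases, model.predict_proba(cases)):
        (auto_decided if max(probs) >= threshold else needs_human).append(case)
    return auto_decided, needs_human

# Toy classifier trained on made-up historical decisions (one feature per case).
X_history = [[1.0], [1.5], [2.0], [8.0], [8.5], [9.0]]
y_history = [0, 0, 0, 1, 1, 1]
model = LogisticRegression().fit(X_history, y_history)

auto, escalated = triage([[0.5], [5.0], [9.5]], model)
print("AI decides:", auto)                # the clear-cut cases
print("Escalated to human:", escalated)   # the borderline case near 5.0
```

The ethical and legal questions begin with who sets that threshold, and on what evidence.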

So, what’s the problem?

The first problem is that the decision maker’s faith in the algorithms driving the AI may well be misplaced.

Humans are fallible and biased, but managers and other decision makers are improving their understanding of likely human misperceptions and bad heuristics.

The pioneering work of Daniel Kahneman and Amos Tversky in behavioural psychology over twenty years ago has now permeated many disciplines, including management theory.

We now have a reasonable handle on how good humans make bad decisions.

However, we have only just started down the path of building managers’ understanding of how to manage AI.

Many businesses and government agencies are not yet familiar with evaluation of data analytics products, or with managing data scientists.

Given the continuing acute shortage of experienced data scientists, this skills deficit is likely to remain a problem in Australia for years to come.

And as with many shiny new products oversold by vendors to excited customers, the buyers of AI may not be well qualified to assess the shortcomings of an AI solution.

Many boards of directors and CEOs are rushing their businesses into AI without properly understanding its current limitations.

Over-reliance upon early AI is a likely outcome.

Another problem is the opacity of many AI applications. Unless transparency is engineered into machine learning, the algorithms may not be properly understood by decision makers, and may be impossible to cross-examine when things go wrong.
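
What ‘engineering in transparency’ means will vary, but one simple approach is to use, or to shadow an opaque model with, a model whose decision logic can be read out and reviewed. The sketch below is a toy illustration only, using scikit-learn’s logistic regression; the feature names and figures are invented.

```python
# Illustrative sketch only: one way to engineer in a degree of transparency is
# to use (or to shadow an opaque model with) a model whose decision logic can
# be read out and cross-examined. Feature names and data here are invented.

from sklearn.linear_model import LogisticRegression

feature_names = ["income_thousands", "years_at_address", "prior_defaults"]
X = [[55, 4, 0], [32, 1, 2], [78, 10, 0], [41, 2, 1], [60, 6, 0], [35, 1, 3]]
y = [1, 0, 1, 0, 1, 0]  # 1 = approved in the historical record

model = LogisticRegression().fit(X, y)

# The model's "reasoning" is a weighted sum, so each weight can be printed,
# reviewed by the decision maker, and challenged when things go wrong.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```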

The algorithms driving the AI may be biased or wrong.

The training data used to generate the algorithms may be too narrow or too shallow, so the algorithm is great with decisions at the centre of the bell curve, but unreliable across the broader range of data sets presented for decisions.

The environment in which the AI is used may be quite different to the anticipated environment for which it was developed.

The algorithm may entrench historical outcomes, rather than facilitate better outcomes.
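
A rough way to make these risks visible is to measure how quickly a model’s accuracy falls away outside the band of data it was built on. The sketch below is a toy illustration with synthetic data, assuming numpy and scikit-learn; none of it comes from a real system.

```python
# Illustrative sketch with synthetic data: measuring how a model built on a
# narrow slice of historical data degrades outside that slice. The numbers
# and the scenario are invented; numpy and scikit-learn are assumed.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Historical data only covers a narrow band of the input range (40 to 60).
X_train = rng.uniform(40, 60, size=(500, 1))
y_train = (X_train[:, 0] / 10) ** 2 + rng.normal(0, 0.1, size=500)
model = LinearRegression().fit(X_train, y_train)

# Compare error at the centre of the bell curve with error over the full range.
for label, lo, hi in [("in-range (40-60)", 40, 60), ("out-of-range (0-100)", 0, 100)]:
    X_eval = rng.uniform(lo, hi, size=(200, 1))
    y_true = (X_eval[:, 0] / 10) ** 2
    mae = np.mean(np.abs(model.predict(X_eval) - y_true))
    print(f"{label}: mean absolute error = {mae:.2f}")
```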

And many evaluations and deployments of AI do not ask the appropriate questions.

The AI may have been properly specified by the supplier, but then let loose for use in a way that is inappropriate for the particular application.

And today, many applications of AI escape careful review as to fairness of outcomes, because ‘fairness review’ is not required as a matter of standard business practice.
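
What a ‘fairness review’ involves depends on the application, but even a modest first pass can be automated. The sketch below shows one common starting point, comparing favourable-outcome rates across groups; the data, columns and groups are invented, and a small gap is not, by itself, evidence of fairness.

```python
# Illustrative sketch of one very basic 'fairness review' check: comparing the
# rate of favourable outcomes across groups in a log of automated decisions.
# The data, column names and groups are all invented for the example.

import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

approval_rates = decisions.groupby("group")["approved"].mean()
print(approval_rates)  # A: 0.75, B: 0.25

# A large gap is a prompt for closer review, not proof of unfairness by itself.
gap = approval_rates.max() - approval_rates.min()
print(f"Approval-rate gap between groups: {gap:.0%}")
```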

Hello GDPR

Contrast the European Union: the General Data Protection Regulation (which has just entered into operation – remember all those notices!) states that individuals should not be subject to a decision based solely on automated processing that produces legal effects for them, or similarly significantly affects them, unless certain conditions are met.

The processing must be necessary to enter into or perform a contract, be based on the individual’s explicit consent, or be allowed by a law that contains suitable safeguards to protect the individual’s rights, freedoms and legitimate interests.

For all such decision-making, except that expressly based on a law, the data subject must at least be informed of the logic involved in the decision-making process, their right to human intervention, the potential consequences of the processing, and their right to contest the decision reached.

Although this new GDPR requirement is not well drafted and legal uncertainty will lead to lots of problems, it is a good start towards building expectations about careful evaluation of reliability of AI, and transparency as to the use of AI.

Yes, this requirement will burden EU businesses and government agencies, but it will also control bad actors and help nurture trust of citizens and consumers, so increasing social licence for good applications of AI.

And it is worth noting that (as in Australia) the GDPR requires that all processing of personal data be fair, lawful and carried out in a transparent manner.

Fairness includes that the method of collecting personal data must not be misleading.

And the processing must be fair to all data subjects.

In an age of social media, automated collection of information about a user’s social media connections might be unfair to those connections, even if it is not unfair to the Facebook user who opted in to that collection: Facebook and would-be Cambridge Analyticas beware!

Legal ramifications

And the lawyers are still trying to catch up.

Among many legal issues raised by AI deployments, two fundamental issues are not yet well understood.

First, the Australian Consumer Law and other statutes impose responsibilities on both suppliers and business users of AI products.

A provider of services to consumers is liable for services provided without due care and skill, and for services made available for a reasonably expected purpose where those services are not fit for that purpose.

A provider of products is also responsible for products which have a safety defect.

Unless the underlying reasoning of the AI is sufficiently transparent and capable of being proven in court, a defendant AI user may have liability exposure to a consumer plaintiff that cannot be sheeted home to an upstream supplier of faulty AI.

Second, the relevant law is not well-equipped to deal with counter-factuals.

The relevant legal question is not whether an AI application performs statistically better than humans.

Rather, the question is whether for a particular AI decision in particular circumstances that a plaintiff has put before the court, the AI user was reasonable in relying upon the AI.

Sometimes that may lead to a counter-factual analysis of whether a human would have done better, but in many cases we can’t be sure that this approach will be accepted in the courts.

AI is unstoppable.

Law and ethics will need to adapt to accommodate good AI.

We may expect plenty of issues arising from bad AI decisions unless businesses and government agencies move ahead of the law to carefully evaluate AI before applying it – and then ensure that AI is used fairly and responsibly.

Peter Leonard of Data Synergies is a lawyer and business consultant to data driven businesses and government agencies. Peter chairs the Australian Computer Society’s AI Ethics Technical Committee.

Read the entire AI Ethics series

Part 1: Could Cambridge Analytica happen again?

Part 2: Ethics-embedding autonomous systems

Part 3: Why Facebook and Google are b@s^a&d$

Part 5: Is AI a match made in heaven?

Part 6: Google doing the right thing