Welcome to the Information Age 6-part series on AI ethics, written by Australia’s foremost experts in this field. Here in Part 2, Simeon Simoff examines ethics-embedding autonomous systems.

The speed with which AI and data-driven decision-making algorithms and systems are fusing into our everyday lives has raised questions about the possible consequences of the decisions guided by such systems, and their impact on our society.

Douglas Rushkoff, an award-winning author and documentary maker whose work focuses on human autonomy in the technologically enabled age, argued in his 2010 book “Program or Be Programmed”[1] that we must understand the “biases” of the different decision-making technologies in order to direct them, rather than being directed by them and by those who developed them. “Choose the latter, and it could be the last real choice you get to make,” states Rushkoff.

The importance of the issue is amplified by portrayals of a dark future for a technology-controlled society, metaphorically similar to what one perceives when passing through the room of Goya’s dark-period works in the Prado Museum.

One way to help avoid the second choice is to embed continuous technical education in our society and to find ways of coping with the unavoidable lag behind the technology frontline.

Ethics and the design of autonomous systems

The research and development community has taken an alternative path and has been investigating how to bring ethical, moral, cultural and other societal aspects into the development of interactive autonomous systems, so that their decisions can be trusted.

This applies both to systems with purpose-built physical bodies, such as robots, vehicles, drones and airplanes, and to systems without such bodies, such as trading systems, CRM systems and online advisors.

In the editorial to the recently published special issue on AI and ethics of the journal Ethics and Information Technology[2], Virginia Dignum, an Associate Professor at the Faculty of Technology, Policy and Management, Delft University of Technology, and Executive Director of the Delft Design for Values Institute, provides a compact, design-centric summary of the multifaceted relationship between ethics and AI, which includes:

(i) “Ethics by Design”, focused on the formalisation, representation and embedding of ethical reasoning in decision-making algorithms, resulting in ethics-embedding autonomous systems;

(ii) “Ethics in Design”, or embedded ethics, focused on methods for analysing and evaluating the ethical impact of the fusion of AI systems into our society; and

(iii) “Ethics for Design”, focused on the methods for ensuring that ethical and societal considerations are taken into account in the research and development work on artificial intelligence algorithms and systems.

Steven Spielberg’s movie A.I. elegantly demonstrates all three groups of these relations, and takes the discussion a step further – it projects artificial intelligence’s perspective on relationships with humans and, with some extrapolation, an ethics viewpoint of artificial intelligence in mixed human-machine societies.

We focus on the first one.

An ethics-embedding autonomous system

An autonomous AI system, with or without a physical body, collects information from its sensors and communication channels and computes a situation assessment – a model of the state of the world in which it operates.

This assessment informs the decision-making algorithms. An ethics-embedding autonomous system will include ethics extensions in the situation assessment and in the structure of the decision functions.
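As a concrete illustration, the minimal Python sketch below shows where such extensions might sit; all names, data structures and thresholds are hypothetical and are not drawn from any particular system. The situation assessment adds an ethically relevant fact to the world model, and the decision function filters and ranks candidate actions against it.

```python
# A minimal, illustrative sketch (all names are hypothetical) of where ethics
# extensions can sit: in the situation assessment and in the decision function.

def assess_situation(sensor_data: dict) -> dict:
    """Build a world model; the ethics extension adds ethically relevant facts."""
    return {
        "obstacle_ahead": sensor_data.get("distance_m", 100.0) < 10.0,
        "humans_nearby": sensor_data.get("pedestrians", 0) > 0,  # ethics extension
    }

def decide(sensor_data: dict, candidate_actions: list) -> str:
    """Filter candidates on an ethical constraint, then rank by task utility."""
    world = assess_situation(sensor_data)

    def permissible(action):
        # Ethics extension in the decision function: never pick an action that
        # risks harming humans when humans are assessed to be nearby.
        return not (action["risks_harm_to_humans"] and world["humans_nearby"])

    options = [a for a in candidate_actions if permissible(a)] or candidate_actions
    return max(options, key=lambda a: a["task_utility"])["name"]

# Example: with pedestrians present, the ethically constrained choice wins.
actions = [
    {"name": "swerve_onto_footpath", "risks_harm_to_humans": True, "task_utility": 0.9},
    {"name": "brake_hard", "risks_harm_to_humans": False, "task_utility": 0.7},
]
print(decide({"distance_m": 5.0, "pedestrians": 2}, actions))  # -> brake_hard
```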

There seems to be a consensus that the design of an ethics-embedding AI architecture will be a hybrid one, deriving its models from different theories and areas of ethics.

In a recent article, How Can We Trust a Robot?[3], Benjamin Kuipers, who teaches ethics for robotics at the University of Michigan, outlines three main areas of normative ethics as sources for computational formalisms (and formulates more than a dozen open research problems for their creation):

· Deontology, focused on the nature of the action in a situation, where the morality and ethics of an action are assessed against a set of rules, like Isaac Asimov’s original “Three Laws of Robotics” – hence suitable for rule-based AI methods. For a deontologist like Immanuel Kant, some actions, like murder, are inherently wrong, no matter what their outcome. Hence, when faced with the classic example featured on the MIT Moral Machine site – the so-called trolley problem, introduced back in the 1960s by the British philosopher Philippa Foot – a formal implementation of Asimov’s First Law within a deontological framework will lead to the self-sacrifice of the robot, unless the robot has a constraint rule requiring self-preservation for humanity’s greater good, in compliance with Asimov’s later-introduced “Zeroth Law”.

· Consequentialism, focused on the outcomes of the action in a situation, where the morality and ethics of an action are assessed against the effects of the action on the state of the world in the given situation. Utilitarianism, a form of consequentialism, allows the “right” action to be selected by computing and optimising a quantitative utility measure.

· Virtue ethics, where the assessment of the action in a situation is based on what has been learnt from previous experience in dealing with such situations – in other words, moral qualities are acquired through learning. Case-based reasoning, a lazy-learning AI method, is an example of a suitable computational approach, as the “wisdom” of past experiences can be represented as specific examples (cases) that describe the situation, the outcomes and the actions. A minimal code sketch contrasting these three approaches follows below.
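To make the contrast concrete, here is an illustrative Python sketch of the three approaches as toy decision procedures – a rule check, a utility optimisation and a case-based lookup. All names and data structures are hypothetical and are not taken from Kuipers’ article; the final function shows one simple way the first two could be combined in a hybrid architecture of the kind mentioned above.

```python
# Illustrative toy evaluators for the three approaches (all names hypothetical).
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # predicted effect, used by the deontological rule check
    expected_utility: float  # predicted aggregate benefit, used by the utilitarian choice
    features: tuple = ()     # situation features, used for case-based comparison

# 1. Deontology: assess the action itself against explicit rules (cf. Asimov's First Law).
def deontological_check(action: Action) -> bool:
    rules = [lambda a: not a.harms_human]      # rule: a robot may not harm a human
    return all(rule(action) for rule in rules)

# 2. Consequentialism (utilitarian form): choose the action with the best expected outcome.
def utilitarian_choice(actions: list) -> Action:
    return max(actions, key=lambda a: a.expected_utility)

# 3. Virtue ethics via case-based reasoning: reuse the action taken in the most
#    similar past case, where similarity is a simple feature overlap.
def case_based_choice(situation: tuple, past_cases: list) -> Action:
    overlap = lambda s1, s2: len(set(s1) & set(s2))
    _, best_action = max(past_cases, key=lambda case: overlap(situation, case[0]))
    return best_action

# A simple hybrid: rule out deontologically forbidden actions first,
# then optimise utility over the remainder.
def hybrid_decision(actions: list) -> Action:
    permitted = [a for a in actions if deontological_check(a)]
    return utilitarian_choice(permitted or actions)
```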

In addition to the range of ethical frameworks, real-world situations require ethical decision making on different time scales – the time scale for an autonomous car’s action to avoid a potential accident may be very different from the time scale for a rescue robot’s action after a hurricane.

These and some other technical and ethical challenges of embedding ethics in the decision making by autonomous machines are elegantly explored through thought experiments by a group of researchers from ONERA (Office National d’Etudes et Recherches Aérospatiales, the French national aerospace research centre[4]).

The environment in which the autonomous system operates may also assist with the implementation of ethics-embedding systems.

For example, in trading and education systems, the principles of electronic institution technology[5], developed at the Artificial Intelligence Research Institute of the Spanish Research Council (IIIA-CSIC), offer mechanisms for implementing and enforcing norms, assessing situations and assisting agents’ behaviour.

For an autonomous car, the electronic infrastructure supporting crossings, streets and motorways can be viewed as the physical infrastructure of an electronic institution, through which vehicles interact and inform their actions, with compliance with specific ethical norms enforced by the electronic institution.

Environments of this type can provide a way for auditing ethics-embedding algorithms and systems.
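As a toy illustration only – this is not the IIIA e-institutions software, and all names and norms are hypothetical – an institution-like crossing infrastructure might check each vehicle’s requested action against the active norms and record every decision for later audit:

```python
# Toy sketch (hypothetical names; not the IIIA e-institutions software):
# the crossing infrastructure checks a vehicle's requested action against
# active norms and records every decision, enabling later audits.

AUDIT_LOG = []   # every interaction is recorded, which is what supports auditing

NORMS = {
    "red_light": lambda req: req["action"] != "enter_intersection",
    "pedestrian_priority": lambda req: not (req["action"] == "enter_intersection"
                                            and req["pedestrians_present"]),
}

def institution_decision(request: dict, active_norms: list) -> dict:
    violated = [n for n in active_norms if not NORMS[n](request)]
    decision = {"vehicle": request["vehicle_id"],
                "permitted": not violated,
                "violated_norms": violated}
    AUDIT_LOG.append(decision)
    return decision

# Example: a vehicle asks to enter the crossing while pedestrians are present.
print(institution_decision(
    {"vehicle_id": "AV-42", "action": "enter_intersection", "pedestrians_present": True},
    ["pedestrian_priority"]))   # -> permitted: False, violated_norms: ['pedestrian_priority']
```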

Epilogue

Whilst many of the ideas behind ethics-embedding autonomous systems are not necessarily new, their combination targets the unbounded complexity of real-world situations in terms of ethical evaluation.

Individual compliance with the ethics of a society is a component of trustworthy behaviour.

Success in developing computational models of human moral and ethical decision making, and in implementing them in systems that fundamentally affect our lives, may provide the answer to the question of whether humans should trust such machines.

Let’s make our decision-making machines trustworthy.

Simeon Simoff is Professor of Information Technology and Dean of Computing, Engineering and Mathematics at Western Sydney University. Simeon is a member of the ACS AI and Ethics Technical Committee which is endeavouring to contribute to addressing the challenges explored in this article.

Read the entire AI Ethics series

Part 1: Could Cambridge Analytica happen again?

Part 3: Why Facebook and Google are b@s^a&d$

Part 4: Artificial intelligence has quietly invaded our workplaces

Part 5: Is AI a match made in heaven?

Part 6: Google doing the right thing

References:

[1] Douglas Rushkoff (2010) Program or be Programmed: Ten Commands for a Digital Age, OR Books, New York.

[2] Ethics and Information Technology (2018) 20:1-3.

[3] Benjamin Kuipers (2018). How can we trust a robot? Communications of the ACM, 61 (3), March 2018: 86-95.

[4] Vincent Bonnemains, Claire Saurel, Catherine Tessier (2018). Embedded ethics: some technical and ethical challenges. Ethics and Information Technology, 20: 41-58.

[5] http://e-institutions.iiia.csic.es/