This Information Age article forms part of a 7-part series on Ethics, covering artificial influencers, facial recognition, IoT, security and more. The series will culminate in an online panel on 11 December. Register to take part in the discussion and send your questions to the ACS Ethics Committee.

There has been wide discussion – some alarmist, some considered – of the impacts of AI on industry, economics and the community.

There has been considerably less discussion of the demands that it will place on the professionals involved.

The long chain between AI algorithm designers and the final impacts of their applications encourages a close technical focus on the tasks involved: designing algorithms, selecting data for tuning, and converting the results into workable tools.

Only at the last stage are the impacts on society and people clearly visible, and often the way in which the technical developments are structured makes any significant level of transparency very difficult to achieve (Wigan, 2015).

This needs to change if trust is to be built and maintained.

One of the roles of professional societies is to engender trust in the work that is carried out by their members.

Codes of ethics are one of the mechanisms used to express expectations, so that others can see what they are.

Mechanisms to enforce these values are also needed; without them, such codes of ethics fall into disregard and then into disrepute.

We are all too familiar with the professions of ethics and organisational values publicly proclaimed by corporate bodies, and have more recently realised that internal structures and cultures make consistent compliance a delicate and often overridden matter.

The recent Royal Commission into the financial industries has made it painfully clear to the community at large that these formal professions of ethical behaviour are neither followed nor enforced.

The critical role of whistleblowers has been repeatedly validated in exposing these failures.

So, too, have the heavy professional and personal penalties that they usually suffer.

The two key factors are once again personal ethics and organisational governance.

Professional societies can no longer limit themselves to the broad statements of values that comprise their codes of ethics, as the price paid by members who stand up to challenge the design or implementation of what they are asked to do is too great.

What is needed is clarification of the importance of the personal ethical stances of their members, beyond ‘Codes of Ethics’ statements, and practical support for them when these stances place them in conflict with their employers.

It is equally important that organisations improve their governance structures and their engagement with such staff, recognising that genuinely ethical engagement can and will pay off in public confidence and in better products and services.

When this is more widely recognised as both necessary and beneficial, the engagement of professional bodies with the managerial levels of organisations will change from a purely consultative role to a more active one.

Professor Emeritus Marcus Wigan FACS is a member of the ACS Professional Ethics Committee; Senior Member of IEEE; and Honorary Fellow at the eScholarship Research Centre, University of Melbourne.

Register to take part in our Ethics online discussion on 11 December.

Read our entire 2018 Ethics series:

Part 1: Artificial influencers
Part 2: Facial recognition unmasked
Part 3: When IoT goes wrong
Part 4: Who’s to blame for phishing breaches?
Part 5: Could encryption legislation increase risk of being hacked?
Part 6: Would you install a keylogger at your workplace?
Part 7: Do you abide by a professional code of ethics?