This Information Age article forms part of a 7-part series on Ethics, covering artificial influencers, facial recognition, IoT, security and more. The series will culminate in an online panel on 11 December. Register to take part in the discussion and send your questions to the ACS Ethics Committee.

A phenomenon which has seen a rapid rise, linked to the widespread use of social media, is that of ‘influencers’.

These are people who have a significant following on social media (such as Instagram, Facebook and Twitter), and endorse products or services to their followers, usually for payment.

Sometimes these influencers don't just endorse products or services, but establish or reinforce trends, such as clothing fashions, lifestyles and fads.

And manufacturers and retailers compete to “sign up” key influencers.

Many people trust these kinds of personal endorsements much more than traditional advertising, so there has been a huge boom in their ‘influence’.

And this is despite such endorsements often being fairly shallow.

The practice is not new, with TV advertising often using well-known figures to endorse products – but this has moved to a new level with social media.

Apart from any ethical questions arising from the place, use, power and integrity of such influencers, there are also significant issues associated with some practices such as buying ‘likes’ to boost one’s apparent reach and influence.

Shudu. Source: Instagram

Now it has emerged that some of these influencers are not real people at all, but CGI constructs, created by their makers and endorsing whatever their makers wish.

One such is the CGI ‘model’ Lil Miquela, with 1.4 million followers, another is Shudu, touted as ‘the world’s first digital supermodel’, with 147,000 followers.

Lil Miquela. Source: Instagram

The use of AI creations does not stop at influencers: companies are also seriously developing and trialling AI in customer help phone services, using voice and language recognition and interactive voice response.

Many of the systems currently deployed are relatively simple, responding to your question/request with, “Did you mean...”, which can be especially annoying when they consistently fail to understand you.

But the technology is advancing rapidly, and soon you may not know if you are speaking to a human or not.

Several novel ethical issues can be identified in the above scenarios.

For instance, should the creators of these simulations always tell you whether you are seeing, speaking with or following a real person?

Does it actually matter if you know that or not?

Are such practices at all deceptive?

What if the ‘person’ gave you advice which turns out to be mistaken: who is responsible and how could you go about taking action if you wanted to?

Could the use of artificial influencers endorsing a product ever be considered to be false advertising?

With regard to buying ‘likes’, are there any circumstances in which this practice might be legitimate?

What ethical issues are at stake here?

There is a range of activities in which you could envisage this being taken much further, for example, avatars providing financial, legal, or even medical advice.

Such developments would surely bring some of these issues further into focus, and certainly increase the scope for litigation.

But at the same time, when properly developed, ‘educated’ and deployed, it is not difficult to imagine such systems providing enormous benefits to the community or to the world.

Is there any essential difference between such advice-giving avatars and influencers?

What light might thinking about such *possible* more advanced developments throw on dealing with issues raised by the existing technology/phenomenon of artificial influencers?

The following articles provide a starting point for exploring these issues:

1. The Conversation, 7-Aug-18:

2. Wired, 1-May-18:

3. Ethical Psychology, 1-Jun-18:

Prof T Alex Reid, FACS is a member of the ACS Ethics Committee. He is a UWA Honorary Fellow and AARNet Advisor.

Register to take part in our Ethics online discussion panel on 11 December.

Read our entire 2018 Ethics series:

Part 1: Artificial influencers
Part 2: Facial recognition unmasked
Part 3: When IoT goes wrong
Part 4: Who’s to blame for phishing breaches?
Part 5: Could encryption legislation increase risk of being hacked?
Part 6: Would you install a keylogger at your workplace?
Part 7: Do you abide by a professional code of ethics?