It’s been 77 years since science-fiction stalwart Isaac Asimov inspired generations of readers by laying down an ethical code for robots in his seminal Three Laws of Robotics, but artificial intelligence (AI) researchers are now doing the same thing for real as rapidly improving AI systems challenge long-held social, business, and cultural norms.

In a letter to “Australia’s business, academic, technology and government leaders”, a group of high-profile Australians – including former Telstra CEO David Thodey and anthropologist Genevieve Bell – say there has been little progress on deciding how AI should be regulated.

“The lack of a regulatory and ethical framework… leaves the old, dumb laws struggling to keep up,” they wrote.

“Now is the time to recognise the opportunities, concerns and expectations in terms of the potential impact of [AI] on the future of our workforce, income, privacy, agency, bias and accountability,” the letter said, “and to start to address them.”

“This is why we are launching a national dialogue… to debate issues surrounding artificial intelligence in Australia.”

The group – which also includes Stuart Fuller, James Mabbott, Ross Buckley, Nicole Gillespie, Toby Heap, Mike Foster, Ken Reid, Kate Marshall and Simon O’Dell – announced it will hold an AI forum in Sydney on 7 February.

Staving off the robot apocalypse

The use of AI has exploded as improving algorithms and cloud-based services increase both the sophistication and accessibility of the technology.

AI is being applied to nearly every field of business, social and strategic endeavour: a recent World Economic Forum-PricewaterhouseCoopers report, for example, identified over 80 environmental applications for AI alone.

Yet AI has a massive awareness issue, with many people unaware they are already using it, and many still channelling Terminator-era concerns that AI-powered robots like the humanoid Sophia will take over the world.

Billionaire inventor Elon Musk has been among the highest-profile proponents of AI oversight, warning that humanity is “summoning the demon” by developing machines capable of thought-like behaviour.

Responding to such concerns has become a full-time job for the tech industry, with Google releasing its principles for ethical AI and joining peers Amazon, Facebook, IBM and Microsoft in the open Partnership on AI.

Experts of every stripe have joined the chorus of voices calling for a reasoned approach to AI, with British Computer Society president Chris Rees calling for industry accountability and the Australian government signalling the need for an AI Ethics Framework as it last year allocated $29.9m towards AI R&D.

Putting AI on the table

Organisations like Accenture have contributed to the discussion with reports such as Responsible AI and Robotics: An Ethical Framework – a less apocalyptic perspective that calls for a combination of appropriate laws and regulations; trust “baked into the system at every level”; and an understanding of AI’s impact across a range of spheres.

Work on formulating AI codes of ethics has so far been concentrated in think tanks like the Alan Turing Institute, the Leverhulme Centre for the Future of Intelligence, the WEF Centre for the Fourth Industrial Revolution, and the Partnership on Artificial Intelligence to Benefit People and Society.

Yet as the technology develops, professional associations like the Institute of Electrical and Electronics Engineers (IEEE) and the Royal Society have joined the effort, working on standards and certifications to manage the explosion of artificially intelligent tools and autonomous systems.

Accenture highlights numerous areas where AI can pose ethical challenges, including decision-making and liability; transparency; bias; preservation of human values; protection of data and IP; social dislocation; and cybersecurity.

Many of these issues have already created challenges for organisations implementing AI to streamline some of their existing processes: e-commerce behemoth Amazon, for example, recently stopped using an AI recruiting tool after it proved to be biased against women.
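Bias of this kind typically surfaces as unequal selection rates between groups of candidates. The sketch below is a purely illustrative Python example of a simple disparate-impact check – the data, group labels and threshold are hypothetical assumptions, not details of Amazon’s actual tool:

```python
# Illustrative sketch: checking a screening model's outputs for disparate impact.
# All data and group labels here are hypothetical.

from collections import defaultdict

# (group, model_decision) pairs: True = candidate shortlisted by the model
decisions = [
    ("women", True), ("women", False), ("women", False), ("women", False),
    ("men", True), ("men", True), ("men", True), ("men", False),
]

# Count shortlisted candidates and totals per group
totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in decisions:
    totals[group] += 1
    if shortlisted:
        selected[group] += 1

# Selection rate per group: shortlisted / total
rates = {group: selected[group] / totals[group] for group in totals}
print(rates)  # {'women': 0.25, 'men': 0.75}

# Disparate impact ratio: lowest selection rate over highest.
# The 0.8 threshold follows the "four-fifths rule" from US hiring guidance.
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:
    print(f"Potential adverse impact: ratio {ratio:.2f} is below 0.8")
```

A real audit would of course use far larger samples and proper statistical tests, but even this crude ratio shows how quickly a skewed model can be flagged once its decisions are broken down by group.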