Moves by the European Parliament to consider granting some form of legal status to intelligent autonomous robots have renewed questions about issues of liability and responsibility.
Who foots the bill when a robot or an intelligent AI system makes a mistake, causes an accident or damage, or becomes corrupted? The manufacturer, the developer, the person controlling it, or the robot itself? Or is it a matter of allocating and apportioning risk and liability?
As autonomy and self-learning capabilities increase, robots and intelligent AI systems will feel less and less like machines and tools, as outlined in my previous column. Machine self-learning capabilities add further complexity to the equation.
Will the spectrum, allocation and apportionment of responsibility keep step with the evolution of self-learning robots and intelligent AI systems? Or will granting ‘electronic rights’ to robots help answer some of these questions?
Last week, I watched YuMi – the robotic maestro – on the news, conducting an orchestra alongside Italian tenor Andrea Bocelli at the First International Festival of Robotics in Pisa. The self-learning robotic maestro could be forgiven for not being able to cope with an impromptu change in tempo from the visually impaired singer or from the orchestra.
On a more serious note, while AI advocates are probably justified in believing that removing human drivers from the road will eliminate up to 90 per cent of all accidents, no one believes AI will be fail-safe. Already we’ve seen traffic accidents caused when reflections from the sun or damage to road signs distorted the way a car’s autonomous systems interpreted key data.
Some regulators have challenged the use of the term “auto-pilot”, saying it suggests that drivers don’t need to pay attention when the car is in self-driving mode. So, if a self-driving car has an accident, who is at fault? The driver, the vehicle manufacturer or the company that created the vehicle’s sensors?
Legislators around the globe are wrestling with these very questions.
One of the proposals being considered is for the creation of a mandatory insurance scheme to ensure that victims of incidents involving robots and intelligent AI systems have access to adequate compensation. This might be similar to the mandatory third-party insurance that car owners need to purchase before being able to register a vehicle.
While the EU is leading the way in considering these issues, Australia is watching closely, and we will need to make our own decisions on these complex questions of liability and responsibility in the very near future.
So far, attention has focused mainly around accountability involving autonomous cars and drones. However, the rapid adoption of AI and robotics systems into diverse areas of our lives – from business, education, healthcare and communication through to infrastructure, logistics, defence, entertainment and agriculture – means that any laws involving liability will need to consider a broad range of contexts and possibilities.
The world learned an object lesson about how easy it can be to pervert an immature AI when Microsoft launched its Tay chatbot.
In what Microsoft described as “an experiment in conversational understanding”, it took Twitter users less than 24 hours to take Tay from enthusiastically proclaiming that “humans are super cool” to assimilating the worst of the Internet and parroting a wide array of racist and misogynistic comments.
It was a perfect example of “garbage in, garbage out” and a succinct reminder that machines lack human ‘nous’ and judgement about right and wrong, and may take their cue from whatever attitudes or perspectives they encounter.
Tay was taken offline after less than a day, highlighting some important lessons about machine self-learning.
We are moving rapidly towards a world where robots and intelligent AI systems are connected to the mesh and influenced by social media, the Internet of Things, and Big Data.
Clear boundaries and filters will be needed to prevent these systems from being influenced and corrupted by darker elements, and to protect humans from potential harm. Cybersecurity safeguards must also be designed into these systems to guard against malicious hacking.
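By way of illustration only, the short Python sketch below shows the kind of input filter such a boundary might involve: user messages are screened before they are ever allowed to influence a self-learning system. Every name in it (BLOCKED_TERMS, toxicity_score, handle_incoming and so on) is hypothetical, and a real deployment would rely on far more sophisticated moderation and security controls; the point is simply that filtering happens before learning.

```python
# Minimal sketch (hypothetical names throughout): screen incoming messages
# against a block-list and a toxicity threshold before they join the pool of
# examples a self-learning model is allowed to learn from.

BLOCKED_TERMS = {"slur_example_1", "slur_example_2"}   # placeholder block-list
TOXICITY_THRESHOLD = 0.8                               # reject anything scored at or above this


def toxicity_score(message: str) -> float:
    """Stand-in for a real toxicity classifier; here, a crude keyword check."""
    words = set(message.lower().split())
    return 1.0 if words & BLOCKED_TERMS else 0.0


def safe_to_learn_from(message: str) -> bool:
    """Return True only if the message passes the content filter."""
    return toxicity_score(message) < TOXICITY_THRESHOLD


def handle_incoming(message: str, training_buffer: list[str]) -> None:
    """Add only filtered messages to the queue used for later model updates."""
    if safe_to_learn_from(message):
        training_buffer.append(message)   # eligible for the next learning pass
    # otherwise the message may still be answered, but is never learned from


if __name__ == "__main__":
    buffer: list[str] = []
    for msg in ["humans are super cool", "slur_example_1 everyone"]:
        handle_incoming(msg, buffer)
    print(buffer)   # only the benign message survives the filter
```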
On top of that, we will need to establish specific protections for potential victims of AI-related incidents to give consumers confidence that they will have legal recourse if something goes wrong.
Introducing a robust regulatory framework with relevant input from industry, policy-makers and government would create greater incentive for AI developers and manufacturers to reduce their exposure by building in additional safeguards to minimise the potential risks to humanity.
At the same time, it would pave the way for an evolutionary adoption of self-learning robots and intelligent AI systems. This new paradigm will ultimately require a significant rethink of some of our long-established legal principles.