You can rest easy; superintelligent artificial intelligence like HAL 9000 and the Terminator will forever remain fiction.

Speaking at a recent artificial intelligence seminar, Dr Mariarosaria Taddeo, research fellow at the Oxford Internet Institute, said AI will never think for itself.

“There are lots of very good sci-fi movies to watch on a Friday night,” she said.

“But there is not a shred of proper research that supports the idea that AI can become sentient.

“This is technology that behaves as if it were intelligent, but that is nothing to do with creating or deducing.

“It performs calculations; it’s just parsing and analysing data.”

Taddeo defined AI as “nothing but a growing resource of interactive, autonomous, and self-learning agency that can be used to perform tasks that would otherwise require human intelligence to be completed successfully.”

AI can already complete tasks – like accurately diagnosing and predicting cancer or making risk calculations – at a level that greatly surpasses human performance.

We are constantly being lured into believing that AI is cleverer than it really is.

Last year, Google unveiled Duplex, an AI assistant that could make phone calls to book restaurant reservations or hairdressing appointments.

Impressively, the AI sounds almost human, can respond to unexpected phrases in real time, and even inserts the occasional ‘um’ typical of imperfect human speech.

Except there was a catch.

A quarter of the bookings made using Duplex were placed by a real human in a call centre, The New York Times reported.

And some calls that started out automated eventually required human intervention.

Machine learning algorithms give AI the illusion of independence, but they must be continuously fine-tuned by people who do things like listen in on conversations with Siri and Alexa.

Despite the ever-more lifelike – and frequently female – mask placed on AI assistants, Taddeo insists that these computer systems still fall short of the requirements of genuine intelligence.

“AI has no intuitions, no feelings, no emotions, nothing that has to do with our creativity, with our unique form of intelligence,” she said.

“This is about imitating human behaviour and doing it so well that you can trick a human into thinking that other forms of intelligence must be there.”

Sorting fact from science fiction

The likes of Elon Musk have warned that AI run amok could bring about the end of human civilisation.

And there is no denying the power of AI.

In early 2017, the EU considered “creating a specific legal status for robots” in order to make them “electronic persons responsible for making good any damage they may cause”.

A robot named Sophia was even granted citizenship in Saudi Arabia.

But research and development of AI systems has been carried out for decades, and no Terminator-like scenario has yet eventuated.

Candace Worley, a vice president at McAfee, thinks the very fact that we consider the threat of superintelligent AI will help ensure its sensible development.

“If we’re intelligent about how we create this tech, we’ll allow it to go as far as it can go but ensure that there are appropriate guardrails around it,” Worley said.

“You can ask AI, ‘How do we solve world hunger?’ and one of the viable responses it can have is ‘kill all humanity’, but that is obviously not an ethical or sensible answer.”


Many countries are already working on ethical frameworks to govern AI production.

As this field of research expands, its own problems become apparent to developers – like the inherent opacity of deep learning systems.

Neural networks can be extremely successful at identifying patterns in data, but the path from query to response is often unknown.
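
To see what that opacity looks like, consider a toy sketch – everything here (the NumPy network, the XOR task, the hyperparameters) is an illustrative assumption, not taken from any system mentioned in this article. A tiny network learns its task perfectly, yet the only ‘explanation’ it can offer is a block of floating-point weights:

```python
# A minimal sketch of neural-network opacity, assuming a toy XOR task
# and a two-layer NumPy network (hypothetical, for illustration only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, randomly initialised.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):                  # plain gradient descent
    h = sigmoid(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # predictions
    d2 = (p - y) * p * (1 - p)         # backprop: output layer
    d1 = (d2 @ W2.T) * h * (1 - h)     # backprop: hidden layer
    W2 -= 0.5 * h.T @ d2
    b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * X.T @ d1
    b1 -= 0.5 * d1.sum(axis=0)

print(np.round(p.ravel()))   # typically [0. 1. 1. 0.] - the right answers
print(W1, W2, sep="\n")      # the "reasoning": opaque blocks of floats
```

The network answers correctly, but nothing in those weight matrices says why – which is exactly the query-to-response gap developers are grappling with.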

Worley thinks this is another opportunity to regulate AI.

“Humanity will be reluctant to allow computer systems to get to the point where they direct or control society without having to explain how they came to their conclusions,” she said.

“We’re going to start seeing these systems built so that they output their algorithm modifications.

“Already we’re seeing organisations build in a documentation process while doing research or experimenting with deep learning.

“They want to look at and record the algorithm evolutions all the way through to the output stage.”
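
Worley doesn’t spell out what such a documentation process looks like, but one plausible shape is an append-only audit trail. In this hedged sketch, the log_training_step helper, the file name, and the record fields are all hypothetical – the idea is simply that each training update leaves an auditable record, so a model’s evolution can be replayed from the first experiment through to the output stage:

```python
# A hypothetical sketch of an AI audit trail: one JSON-lines record per
# training update, so the model's evolution can be reconstructed later.
# The helper name, file name, and fields are illustrative assumptions.
import json
import time

def log_training_step(path, step, hyperparams, metrics):
    """Append one auditable record describing a training update."""
    record = {
        "timestamp": time.time(),    # when the update happened
        "step": step,                # which update in the run
        "hyperparams": hyperparams,  # settings in effect, e.g. learning rate
        "metrics": metrics,          # observed results, e.g. loss, accuracy
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Usage: call once per epoch or update inside the training loop.
log_training_step("audit_log.jsonl", step=1,
                  hyperparams={"lr": 0.01}, metrics={"loss": 0.42})
```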