Google is toying with the idea of embedding machine intelligence in consumer devices like smartphones to enable them to ‘see’ and ‘hear’ like their human users.

The internet giant has struck a deal with chipmaker Movidius that it believes could bring about “future products [with] the ability to understand images and audio with incredible speed and accuracy”.

"The technological advances Google has made in machine intelligence and neural networks are astounding,” Movidius CEO Remi El-Ouazzane said.

“The challenge in embedding this technology into consumer devices boils down to the need for extreme power efficiency.”

Power is a key challenge in creating technology that goes even some way toward mimicking human senses in computers.
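Neither company has said exactly how that efficiency will be achieved. One widely used technique for low-power embedded inference, sketched below purely for illustration, is quantizing a network’s 32-bit floating-point weights to 8-bit integers, which cuts memory traffic by three quarters and lets hardware use cheaper integer arithmetic (the weight values here are made up, not drawn from any real model):

```python
import numpy as np

# Illustrative float32 weights for one layer of a neural network.
weights = np.array([[0.42, -1.37, 0.08],
                    [2.15, -0.56, 0.91]], dtype=np.float32)

# Linear quantization: map the observed value range onto the
# symmetric levels representable in a signed 8-bit integer.
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# Embedded hardware can then store and multiply the int8 values,
# carrying only the single float scale; dequantizing recovers an
# approximation of the original weights.
dequantized = quantized.astype(np.float32) * scale

print("max absolute error:", np.abs(weights - dequantized).max())
```

Shuffling a quarter of the bytes and multiplying integers instead of floats is one common way embedded accelerators keep vision workloads inside a smartphone’s power budget.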

“We have an amazing visual system built into our heads,” Movidius CTO David Moloney said in a TEDx talk last year.

“About 50 percent of everything our brain does is vision processing. We’ve got about 80 billion neurons in our heads and literally tens of trillions of synapses, and it’s essentially the combination of that, co-located with your eyes, that allows you to do the visual sensing.

“Then your frontal lobes do the interpretation of that at a higher level and the decision making.

“About 20 watts total – the energy requirement of a light bulb – is what’s required to power all of this. So about 10 watts is used for vision processing.

“If you try and duplicate directly using the technology that we have and probably even the technology we’ll have in 10 or 15 years’ time, you’re talking about megawatts of power to do what the brain can do.

“What we can do compared to what is the product of hundreds of millions of years of evolution is very limited.”

Moloney cited the long-running Blue Brain Project, which aims to reconstruct a human brain in computer simulation.

“[There] you’re talking about the order of 10 megawatts being required and an extremely large computer being required to duplicate what we can do,” he said.

“And that’s only notional in that no one has actually managed to do this yet.”
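Taken together, the figures Moloney quotes make the efficiency gap easy to put a number on: half of the brain’s 20-watt budget gives roughly 10 watts for vision, while the simulation he describes needs on the order of 10 megawatts. A back-of-the-envelope calculation using only the numbers quoted above:

```python
# Figures as quoted by Moloney above.
brain_watts = 20.0                 # total power budget of the human brain
vision_watts = 0.5 * brain_watts   # "about 50 percent ... is vision processing"
simulation_watts = 10e6            # ~10 MW for a brain-scale simulation

print(f"vision processing: ~{vision_watts:.0f} W")
print(f"efficiency gap:    ~{simulation_watts / brain_watts:,.0f}x")
# vision processing: ~10 W
# efficiency gap:    ~500,000x
```

That five-orders-of-magnitude gap is why power, rather than raw compute, is the hard constraint on putting such capability in a phone.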

Google’s machine learning group head Blaise Agüera y Arcas is confident that the technology, as it is currently progressing, will produce useful outcomes.

“[The systems now are] starting to be able to solve some of the kinds of problems that our visual system solves, like recognising what is in a room,” he said.

“And when we look at how the artificial neurons are being activated by those sensory stimuli, we see a lot of obvious parallels between how that works and how we see real brains working.”
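Mechanically, the “activation” he refers to is nothing more exotic than each artificial neuron computing a weighted sum of its inputs and passing the result through a nonlinearity. A toy forward pass in NumPy makes this concrete (the layer size and random weights are purely illustrative, not any Google model):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "sensory stimulus": four input values standing in for a camera frame.
stimulus = rng.random(4).astype(np.float32)

# One layer of three artificial neurons with random, illustrative weights.
weights = rng.standard_normal((3, 4)).astype(np.float32)
biases = np.zeros(3, dtype=np.float32)

# Each neuron's activation: a weighted sum of the inputs plus a bias,
# passed through a ReLU nonlinearity.
activations = np.maximum(0.0, weights @ stimulus + biases)

print("neuron activations:", activations)
```

Which units fire, and for which stimuli, is what researchers inspect when drawing the parallels to biological brains that Agüera y Arcas describes.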

His hope was that “[i]nstead of us adapting to computers and having to learn their language, computers [might become] more and more intelligent in the sense that they adapt to us.”

There is little indication yet of just what might become possible by embedding a Movidius chip that runs Google’s deep learning algorithms in a device like a smartphone.

“As the companies continue their collaboration, more details will become available,” the firms said.