Facebook CEO Mark Zuckerberg set himself two goals for 2016: run 365 miles (587 km) and build an artificial intelligence (AI) system to control his home.
He achieved both – though he notes the running took far longer than the coding. However, it’s his experience building in-home AI that will interest many technologists and watchers of the AI space.
Zuckerberg has named his system Jarvis – a hat-tip to Tony Stark’s AI assistant in Iron Man – and managed to get it to a fairly sophisticated level in just one year.
In doing so, he learned “more than [he] expected” about what the current generation of AI systems are and aren’t good at, and consequently where improvement will be required to make in-home AI a reality.
There are, of course, ways to introduce some level of AI into the home, through products like Amazon’s Echo. Information Age has previously covered some of the trials and dilemmas experienced by users.
However, Zuckerberg set out to connect a greater number of his home’s systems and electronics together in a way that they could be operated using AI.
“So far this year, I've built a simple AI that I can talk to on my phone and computer, that can control my home, including lights, temperature, appliances, music and security, that learns my tastes and patterns, that can learn new words and concepts, and that can even entertain [my daughter] Max,” Zuckerberg said.
“But before I could build any AI, I first needed to write code to connect these systems, which all speak different languages and protocols.”
That presented quite an initial hurdle and taught Zuckerberg that in order “to be able to control everything in homes for more people, we need more devices to be connected and the industry needs to develop common APIs and standards for the devices to talk to each other.”
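The problem Zuckerberg describes – devices that all “speak different languages and protocols” – is essentially an adapter problem: each vendor exposes its own API, and the controller needs one common interface to drive them all. A minimal sketch of that idea in Python (all class and method names here are hypothetical illustrations, not taken from Zuckerberg’s actual code):

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Common interface the home controller talks to."""
    @abstractmethod
    def turn_on(self) -> None: ...
    @abstractmethod
    def turn_off(self) -> None: ...

class LightAdapter(Device):
    """Wraps one vendor-specific light API behind the common interface."""
    def __init__(self, vendor_api):
        self.api = vendor_api
    def turn_on(self) -> None:
        self.api.set_state(brightness=254)  # vendor-specific call
    def turn_off(self) -> None:
        self.api.set_state(brightness=0)

class FakeVendorAPI:
    """Stand-in for a real vendor SDK, so the sketch is runnable."""
    def __init__(self):
        self.brightness = 0
    def set_state(self, brightness: int) -> None:
        self.brightness = brightness

# The controller only ever sees Device, never the vendor API directly.
api = FakeVendorAPI()
light = LightAdapter(api)
light.turn_on()
print(api.brightness)  # 254
```

Each new appliance gets its own adapter, which is exactly the per-device glue code Zuckerberg had to write – and why he argues common industry standards would remove that burden.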
“Once I wrote the code so my computer could control my home, the next step was making it so I could talk to my computer and home the way I'd talk to anyone else,” he said.
He built both text and voice interfaces for communicating with Jarvis; the system transcribes his speech to text before processing it.
To his surprise – and running counter to current voice-first personal assistant products – he spent more time texting commands than speaking them.
“When I have the choice of either speaking or texting, I text much more than I would have expected,” Zuckerberg said.
“This is for a number of reasons, but mostly it feels less disturbing to people around me.
“If I'm doing something that relates to them, like playing music for all of us, then speaking feels fine, but most of the time text feels more appropriate.
“Similarly, when Jarvis communicates with me, I'd much rather receive that over text message than voice. That's because voice can be disruptive and text gives you more control of when you want to look at it. Even when I speak to Jarvis, if I'm using my phone, I often prefer it to text or display its response.”
Zuckerberg saw his experience as suggestive that future AI products “cannot be solely focused on voice and will need a private messaging interface as well.”
Another limitation of personal assistants like the Echo is that you need to buy dedicated hardware and be near it whenever you want to ask something.
Zuckerberg instead runs Jarvis as a custom app on his phone, giving him a portability that turned out to be essential.
“My dedicated Jarvis app lets me put my phone on a desk and just have it listen,” he said.
“I could also put a number of phones with the Jarvis app around my home so I could talk to Jarvis in any room. That seems similar to Amazon's vision with Echo, but in my experience, it's surprising how frequently I want to communicate with Jarvis when I'm not home, so having the phone be the primary interface rather than a home device seems critical.”