Google Brain researchers believe the development of artificial intelligence, and machine learning more generally, would benefit from greater diversity among the contributors to such initiatives.
Project lead Jeffrey Dean said in a Reddit AMA that while he wasn’t worried about a future “AI apocalypse”, he harboured concerns about “AI safety and policy” as well as “about the lack of diversity in the AI research community and in computer science more generally.”
He believed that achieving the Brain team’s mission to “make machines intelligent and improve people’s lives” would require a more diverse set of contributors to ensure machines could think in a humanistic fashion and contribute positively to the world.
“One of the things I really like about our Brain Residency program is that the residents bring a wide range of backgrounds, areas of expertise (e.g. we have physicists, mathematicians, biologists, neuroscientists, electrical engineers, as well as computer scientists), and other kinds of diversity to our research efforts,” Dean said.
“In my experience, whenever you bring people together with different kinds of expertise, different perspectives, etc., you end up achieving things that none of you could do individually, because no one person has the entire skills and perspective necessary.”
As well as diversity of inputs, the Google researchers saw opportunities to reduce the amount of training data required to get an algorithm up to speed.
“Current machine learning algorithms require vastly more examples to learn from than people do to learn the same task,” senior research scientist Greg Corrado said.
“In a sense, this means that our current machine learning algorithms are wildly ‘inefficient’ data consumers.
“Figuring out how to learn more from less is a very exciting research area, both inside Google and in the larger research community.”
However, Corrado noted that “the amount of data required to learn to do something useful is highly dependent on the task in question.”
“Building a machine learning system to learn to recognise hand-written digits requires far less than to recognise dog breeds in photos, which in turn requires less than would be required to summarise movie plots simply from watching the movie,” he said.
“For many cool tasks people might want to do, they can easily source sufficient data today.”
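Corrado’s point about task difficulty can be made concrete with a toy sketch (not from the AMA, and far simpler than anything Brain would deploy): when a classification task is easy and the classes are well separated, even a naive one-nearest-neighbour classifier generalises from a handful of labelled examples.

```python
# Toy illustration: easy tasks need very little training data.
# Two well-separated synthetic classes are classified with
# one-nearest-neighbour after seeing only ten labelled points.
import random

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    return min(
        train,
        key=lambda ex: (ex[0] - point[0]) ** 2 + (ex[1] - point[1]) ** 2,
    )[2]

random.seed(0)
# Class "a" clusters around (0, 0); class "b" around (10, 10).
train = (
    [(random.gauss(0, 1), random.gauss(0, 1), "a") for _ in range(5)]
    + [(random.gauss(10, 1), random.gauss(10, 1), "b") for _ in range(5)]
)
test_set = (
    [(random.gauss(0, 1), random.gauss(0, 1), "a") for _ in range(50)]
    + [(random.gauss(10, 1), random.gauss(10, 1), "b") for _ in range(50)]
)

correct = sum(
    nearest_neighbour(train, (x, y)) == label for x, y, label in test_set
)
print(f"Accuracy with only 10 training examples: {correct / len(test_set):.2f}")
```

Tasks like dog-breed recognition or movie-plot summarisation offer no such clean separation, which is why, as Corrado notes, their data requirements grow so steeply.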
Corrado saw healthcare as a key frontier for the application of machine learning and AI techniques.
“My personal conviction is that developing machine learning techniques to improve the availability and accuracy of medical care is the single greatest opportunity for applied machine learning today,” he said.
“We’ve been working on this for some time both in Brain and at [Google] DeepMind; for example, we already have great results on applying deep learning to diagnosing diabetic retinopathy, a leading cause of preventable blindness.”
Dean said that Brain researchers tended to focus their efforts in “areas that have significant open research problems, and where solving some of those problems would lead to being able to build significantly more intelligent agents and systems.”
The project divided many of its efforts into “moonshot” themes.
“As an example, one such moonshot is to develop learning algorithms that can truly understand, summarize, and answer questions about long pieces of text (long documents, collections of hundreds of documents, etc.),” he said.
“This sort of work is done without any particular product in mind, although it would obviously be useful in many different kinds of contexts if we were able to do this successfully.”
Other research is performed collaboratively with Google product teams or is simply “driven by curiosity”.
“Because we have many exciting young researchers visiting year round, residents and interns, we also often explore directions that are exciting to the machine learning community at large,” Dean said.
Dean refused to be drawn on exactly how much Google is investing in the Brain project. “We don’t reveal specifics about our budget,” he said.