There is a growing move to redefine the acronym AI as augmented – rather than artificial – intelligence, a nod to its intended role of helping humans rather than replacing them at work.
But it remains an open question just how willing people will be to take constructive feedback on the way they work from an algorithm or machine.
Researchers at MIT Media Lab are trying to understand that dynamic better.
The researchers want to understand how people’s biases affect how they react, and to get a computer to provide feedback that might help them to change their behaviour.
“We call this human-in-the-loop machine learning,” lab director Joichi Ito told attendees of the IBM World of Watson conference in Las Vegas.
“The idea is can we make the computer smarter by understanding the biases of the humans, but then can we go back to the human being and help them improve?”
Traditionally, Ito said, computer scientists built machine learning models and tweaked them incessantly until the results they produced “roughly match reality”.
Only then were people invited to use the models and provide feedback on their relative success.
MIT’s research changes that.
“What we’re doing here is actively putting a human being into the training loop, and we can also create an interface where we’re providing [them] feedback in real time,” Ito said.
In other words, Ito and his team want to create machine learning models and algorithms not just from data: industry professionals are also invited to interpret that data.
That interpretation – called a “lens” – helps the machine understand the different inferences that are possible, potentially making it more accurate at recognising the right answer, and at spotting where the humans working towards that answer are going wrong.
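The loop Ito describes – the model predicts, the human “lens” supplies its own interpretation, the model updates on disagreement, and disagreements are logged so they can be fed back to the human – can be sketched in a few lines. This is a hypothetical minimal example using a simple online perceptron; none of the names or details below come from the MIT work itself:

```python
# Minimal human-in-the-loop sketch (illustrative only, not MIT's system).
# A simple online perceptron is corrected in real time by a human "lens":
# when the model's prediction disagrees with the expert's label, the model
# updates immediately, and the disagreement is logged as feedback that a
# real system might surface back to the human.

def predict(weights, features):
    """Linear threshold prediction: 1 if w.x > 0 else 0."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def human_in_the_loop(examples, expert_label, lr=0.1):
    """Train online, asking the expert ('lens') to label each example.

    Returns the learned weights and a log of disagreements that could be
    fed back to the expert as bias feedback.
    """
    weights = [0.0] * len(examples[0])
    disagreements = []
    for features in examples:
        guess = predict(weights, features)
        label = expert_label(features)   # the human's interpretation
        if guess != label:
            disagreements.append((features, guess, label))
            # perceptron update towards the expert's label
            sign = 1 if label == 1 else -1
            weights = [w + lr * sign * x for w, x in zip(weights, features)]
    return weights, disagreements

# Usage: a simulated expert who labels by the sign of the first feature.
data = [[1.0, 0.2], [-1.0, 0.5], [0.8, -0.3], [-0.6, 0.1]]
expert = lambda f: 1 if f[0] > 0 else 0
weights, log = human_in_the_loop(data, expert)
```

In a real deployment the `expert_label` call would be an interactive prompt to the professional, and the `disagreements` log is where the “by the way, that decision was a little biased” feedback would come from.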
The payoff for the professional – if they want it – is the opportunity for self-improvement by taking on board feedback and advice from the machine.
“If you can imagine, if at any time you make a decision, the [machine can] say, ‘by the way, that decision you made is a little bit biased in this direction’,” Ito said.
“It probably depends on the industry and personality of the person whether they care about whether they have biases or not, but … maybe you can improve yourself.
“This idea of bringing humans into the training loop rather than just having them as outside customers is a fairly popular trend in AI that we’re working on quite a bit at the lab.”
Ito believed stock market traders could be open to this type of coaching by a computer.
“If you’re a trader on Wall Street and you’re making trades, you want to know every single bias that you have,” he said.
However, medical doctors, for example, may be more reluctant to take feedback or advice from their computers.
The lab is presently involved in joint research to create algorithms that could help diagnose patients’ conditions based on the conversation between the doctor and patient alone.
Though the algorithms are designed to recognise and account for the doctor’s biases, and can feed that information back to them, Ito isn’t sure just how receptive doctors might be to the idea.
Ry Crozier attended IBM's World of Watson as a guest of IBM.