Google has fired a prominent AI researcher after she co-authored a paper that was critical of current natural language processing models, such as those developed and used by Google.
Timnit Gebru was technical co-lead of Google’s Ethical Artificial Intelligence Team until last week, when her time at the company came to a sudden end.
“Apparently my manager’s manager sent an email [to] my direct reports saying she accepted my resignation,” Gebru said on Twitter.
“I hadn’t resigned – I had asked for simple conditions first and said I would respond when I’m back from vacation. But I guess she decided for me.”
In the lead-up to her disputed departure, Gebru had been fighting pushback against a paper she co-authored on the ethical considerations of natural language models.
According to a post from a group of dissident Googlers, the paper – which has not been made public but has been seen by the MIT Technology Review – was initially greenlit by the tech giant’s internal approval processes.
But with the conference deadline looming and a peer review already carried out, Google executives pulled Gebru and her co-authors into a meeting and demanded they either retract the paper or remove their names from it.
“No written feedback was provided from leadership, the authors were not given an opportunity to communicate about the verbalised concerns to anyone involved, and the authors were not provided with an opportunity to revise the paper in light of the feedback,” said the Google Walkout for Real Change group.
The group published an open letter supporting Gebru which has been signed by thousands of people both inside and outside Google.
Ethics in AI
In its account of the still-unpublished research paper, the MIT Technology Review said the authors provide an overview of four main risks inherent in building large language models.
First were the environmental and financial costs of natural language processing: both the high carbon footprint and the monetary cost of cloud computing, which “tend to benefit wealthy organisations”.
Next came criticism of how language models are trained on massive amounts of text data, usually scraped from the internet, and how these datasets risk embedding racism and sexism into the resulting models.
The paper also criticised the research opportunity cost of building models that manipulate language by mimicking understanding to serve business needs, rather than working toward systems that actually understand language.
Finally, the authors took issue with models that convincingly mimic human language because of their potential to spread misinformation online at scale, the kind of consideration OpenAI weighed when it publicly released its GPT-2 language model.
Google disagreed with the research paper.
Jeff Dean, Senior Vice President of Google Research and Health, published his response to the incident, saying the paper “had some important gaps” that stopped Google from giving it the stamp of approval.
“For example, it didn’t include important findings on how models can be made more efficient and actually reduce overall environmental impact, and it didn’t take into account some recent work at Google and elsewhere on mitigating bias in language models,” he said.
“Highlighting risks without pointing out methods for researchers and developers to understand and mitigate those risks misses the mark on helping with these problems.”
Internal review
By the time it hit Dean’s desk, the paper had six co-authors and had been reviewed by 28 colleagues for feedback.
Google scientist Nicolas Le Roux said the company’s research review process was designed to check for sensitive material, “never for the quality of the literature review”.
My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review. Now might be a good time to remind everyone that the easiest way to discriminate is to make stringent rules, then to decide when and for whom to enforce them.
— Nicolas Le Roux (@le_roux_nicolas) December 3, 2020
Gebru demanded more information about the hidden review process and sent a scathing email to an internal Google mailing list, Google Brain Women and Allies, in which she aired her frustrations.
“I understand that the only things that mean anything at Google are levels, I’ve seen how my expertise has been completely dismissed,” Gebru said.
“But now there’s an additional layer saying any privileged person can decide that they don’t want your paper out with zero conversation.
“So you’re blocked from adding your voice to the research community—your work which you do on top of the other marginalisation you face here.”
Google’s dominance of artificial intelligence research recently attracted further criticism when 31 scientists complained about the company publishing research that can’t be replicated or scientifically verified due to the proprietary technology involved in its creation.