The Distributed Artificial Intelligence Research Institute (DAIR), a new ethical AI research institute, has been launched by Dr Timnit Gebru, former co-lead of Google's Ethical AI team.

Gebru, who was controversially fired by Google after raising concerns about bias in AI, said the new institute aims to centre marginalised communities vulnerable to AI harms such as algorithmic bias. It will create spaces where independent researchers can set the agenda and conduct AI research rooted in communities and lived experiences.

Taking an interdisciplinary and global perspective, DAIR has been formed with the belief that the widespread use of AI is not inevitable and that its harms are preventable.

Furthermore, DAIR believes that AI can be beneficial when its development includes diverse perspectives and deliberate processes.

“Our research reflects our lived experiences and centres our communities,” Gebru wrote on the organisation’s website.

Gebru wants the organisation to counter what she sees as Big Tech’s pervasive influence on the research, development and deployment of AI.

As an AI expert, she believes the technology has been elevated to superhuman levels and needs to be brought back down to earth.

“When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity,” she said.

Departure from Google AI ethics team

With DAIR, Gebru aims to create an environment independent of the structures and systems that incentivise profit over ethics and individual wellbeing in AI.

It comes on the one-year anniversary of Gebru’s ousting from Google, which made headlines at the time.

Writing in The Guardian, Gebru described the move as “disrespectful”.

Her departure centred on a paper, co-written with Margaret Mitchell, who was reportedly also let go from Google, about the risks of large language models.

Gebru believes this practice favours large, wealthy organisations that can bear the huge costs involved. It also relies on scraping vast amounts of text, which includes and normalises offensive material in the training data, and can ultimately mistranslate and misunderstand meaning, leading to bias, harmful outcomes and unethical uses of the technology.

DAIR will bring more public interest technologists to the table and strengthen the movement toward inclusive and equitable technology.

The institute has a range of well-funded backers, including the Ford Foundation, the Open Society Foundations and the Kapor Center, all of which support the goal of building public interest technology, an emerging field focused on harnessing the power of technology for the public good.

Changing the very structure of AI research

Gebru wants to overhaul the way AI research is funded and conducted, to wrest it away from large technology companies and elite universities.

“Only when we change the incentive structure will we see technology that prioritises the wellbeing of citizens – rather than a continued race to figure out how to kill more people more efficiently, or make the most amount of money for a handful of corporations around the world,” Gebru wrote in The Guardian.

DAIR’s research philosophy is to perform long-term community-based and interdisciplinary research rather than the current trend of constantly rushing to publish work every few months.

The specific research directions the institute is interested in pursuing include developing AI for low-resource settings, language technology that supports marginalised communities, and data-related work conducted with robust testing and documentation.

The group believes in a ‘bottom-up’ approach to research, supporting ideas initiated by members of the DAIR community, rather than a purely ‘top-down’ direction dictated by a few.

“We hope to incentivise researchers to take their time and maximise the impact of their work,” said Gebru.