Meta has scrapped its responsible artificial intelligence team, a year after the tech giant closed its responsible innovation branch.

The news was first reported by The Information, which said that Meta’s Responsible AI team had been disbanded, with its members moving to other teams, including Generative AI and the company’s AI infrastructure.

The Responsible AI team was formed in 2019 to identify potential problems with the Facebook parent company’s AI approaches, such as whether its models were being trained on adequately diverse data.

Meta has repeatedly claimed that it is focused on the ethical development of AI, and on its website lists its “pillars of responsible AI” as including accountability, transparency, safety, and privacy.

A Meta spokesperson said that the company will “continue to prioritise and invest in safe and responsible AI development”, and that while the team is being split up, its members will “continue to support relevant cross-Meta efforts on responsible AI development and use”.

The latest revelation comes after a significant restructuring of the Responsible AI team and layoffs earlier this year, with a Business Insider report describing it as a “shell of a team” with little autonomy.

Last month Meta unveiled a number of new AI products, including a beta version of its conversational assistant, available on WhatsApp, Messenger and Instagram.

According to Meta, the assistant provides “real-time information and generates photo-realistic images from your text prompts in seconds to share with friends”.

The service is currently only available in the US.

In making this announcement, the social media giant said it was “committed to building responsibly with safety in mind”.

“We are continuing to test and evolve the capabilities of our AIs, and will improve the experience over time through what we learn from your interactions with them,” the Meta announcement said.

“Your direct feedback and the conversations you have with our AIs are core parts of what will help us improve our AI models, and ultimately enhance the experience at scale.”

The disbanding of the AI team comes just after OpenAI ousted its co-founder and chief executive Sam Altman, following a review that concluded he was “not consistently candid in his communications” with the board.

Altman has already been hired by Microsoft to lead a new AI team.

In September last year, Meta also disbanded its Responsible Innovation team, which was tasked with identifying potential ethical problems with its products.

With developments in AI technology progressing rapidly and the astronomical rise of generative AI tools such as ChatGPT this year, there are growing concerns over the negative consequences of these technologies.

Big tech companies have attempted to emphasise that they are designing AI tools with ethics in mind, yet they continue to shed the teams devoted to this work.

In March, Microsoft also offloaded its team dedicated to teaching employees how to build AI tools responsibly.

The ethics and society team was caught up in the wider layoffs at Microsoft that affected 10,000 employees, leaving the tech giant without a team tasked with ensuring its AI principles are reflected in its product design.

In late October, US President Joe Biden signed a sweeping executive order aiming to address national security, data privacy, public safety and other potential risks associated with AI.

This order will force developers of core AI models to share safety testing results.

Earlier this year an open letter signed by prominent tech figures, including executives at Microsoft, warned that AI poses risks to humanity on the scale of nuclear weapons or pandemics.