Microsoft’s artificial intelligence technology has again run into trouble and is facing accusations of racial bias after it confused the mixed-race singers of a British pop group.

The tech giant last month replaced the human editors who curated the MSN homepage with artificial intelligence technology, which now selects and posts articles from other news sources.

The robotic editors earlier this week selected a news article about Little Mix singer Jade Thirlwall and her experiences with racism to post on the homepage but used a photo of her bandmate Leigh-Anne Pinnock as the feature image.

Thirlwall said this sort of apparent mix-up is common.

“It offends me that you couldn’t differentiate the two women of colour out of four members of a group...DO BETTER,” Thirlwall tweeted in response.

Soon after, humans at Microsoft were forced to take down stories about the AI gaffe that were being automatically posted by the very same AI system.

Microsoft laid off 50 journalists in the US and 27 in the UK who had previously been tasked with selecting the articles to feature from other sources on the MSN homepage.

The mistake with the image of Little Mix is the exact kind of error these people were tasked with identifying and preventing.

Oops, we did it…again

It’s not the first time Microsoft’s AI technology has run into trouble, with its social media chatbot turning into a racist monstrosity in less than a day in 2016.

A Microsoft spokesperson said the mistake was quickly rectified.

“As soon as we became aware of this issue, we immediately took action to resolve it and have replaced the incorrect image,” the spokesperson said.

“In testing a new feature to select an alternate image, rather than defaulting to the first photo, a different image on the page of the original article was paired with the headline of the piece.

“This made it erroneously appear as though the headline was a caption for the picture. As soon as we became aware of this issue, we immediately took action to resolve it, replaced the incorrect image and turned off this new feature.”

The Guardian, which first reported the mistake, also reported that the remaining human staff at MSN had been told that negative coverage of the AI would likely be automatically posted on the homepage by the AI itself, and that these stories should be taken down.

“Microsoft’s artificial intelligence news app is now swamped with stories selected by the news robot about the news robot backfiring,” Guardian editor Jim Waterson tweeted.

Artificial intelligence has regularly been shown to exhibit racial biases and to contribute to further inequality.

In 2016, Microsoft famously unveiled a social media chatbot named Tay. But less than 24 hours after launch, the bot began posting racist statements and was promptly pulled offline. The company attempted to update the bot to prevent it from learning racist statements, but was forced to pull it down again soon after.

Last month an American tech company unveiled a new tool it said would help identify discriminatory behaviour by AI technology before it is released to the public.

The service is meant to weed out discriminatory behaviour in AI being used by companies to sell products to consumers, in an effort to ensure this technology is used in a fair way.