Artificial intelligence (AI) researchers at one of the world’s top technical universities have been left red-faced after revelations that their widely used image-recognition data set is teaching computer-vision software to classify images using racist and misogynistic slurs.

The 80 Million Tiny Images database, which contains 79.3m low-resolution images assembled by Massachusetts Institute of Technology (MIT) researchers, has been used since 2008 to train image-recognition algorithms to identify objects in photos.

It has now been taken offline and “it will not be put back online,” researchers said.

Each of the database images is associated with one of around 75,000 nouns – allowing computer-vision tools to automatically scan a photo for objects they recognise and describe the scene using an English sentence like ‘A girl rides on a horse in a field’.

Tiny Images, along with a similar database called ImageNet, has been widely used by researchers to improve tools for automatic image management, captioning of photos and even tasks like helping self-driving cars detect pedestrians and read road signs.

Yet until recently, it seems, nobody had looked closely enough at the nouns – sourced directly from Princeton University’s WordNet ‘lexical database’ – to realise that the list was peppered with derogatory words, which ended up being used as descriptors for people, body parts and animals in many Tiny Images entries.
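For readers unfamiliar with it, WordNet is a machine-readable dictionary that groups English words into sets of synonyms with short definitions. As a rough illustration – assuming NLTK and its WordNet corpus are installed, and not reflecting the MIT team’s own tooling – the kind of noun entry behind each Tiny Images label can be queried directly:

```python
# A quick look at WordNet via NLTK (requires nltk.download("wordnet") first).
# This illustrates the lexical database the labels were drawn from; it is not
# the tooling used to build Tiny Images.
from nltk.corpus import wordnet as wn

for synset in wn.synsets("horse", pos=wn.NOUN):
    print(synset.name(), "-", synset.definition())
# e.g. horse.n.01 - solid-hoofed herbivorous quadruped domesticated since prehistoric times
```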

An upcoming paper by UnifyID AI Labs researcher Vinay Uday Prabhu and University College Dublin researcher Abeba Birhane, which analysed the “verifiably pornographic” associations and “ethical transgressions” contained within image-recognition datasets, exposed the problem.

The Tiny Images data set was, among other things, labelling images of dark-skinned people and monkeys with the N-word, and describing women in bikinis as ‘whores’.
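Auditing a data set of this size for such terms is conceptually straightforward even if the findings are not: the labels can be screened against a list of known slurs. The sketch below is a minimal illustration of that idea, using placeholder names and counts; it is not the methodology Prabhu and Birhane describe in their paper.

```python
# Minimal label-audit sketch: flag dataset labels that appear on a blocklist
# of derogatory terms. Labels, counts and the blocklist here are placeholders.
def flag_offensive_labels(label_counts: dict, blocklist: set) -> dict:
    """Return offensive labels found in the data set and how many images use them."""
    return {label: count
            for label, count in label_counts.items()
            if label.lower() in blocklist}

label_counts = {"horse": 12_000, "bikini": 3_500, "slur_example": 180}
blocklist = {"slur_example"}        # in practice, an externally maintained word list
print(flag_offensive_labels(label_counts, blocklist))   # {'slur_example': 180}
```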

AI only learns what we teach it

As it turns out, the Tiny Images dataset was created by Googling 53,464 nouns drawn from the WordNet database, then automatically scraping the resulting images and shrinking them into 32-by-32-pixel thumbnails – all without any human oversight.
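In outline, the pipeline was: take a noun, run an image search, download whatever comes back, and shrink it to a 32-by-32 thumbnail. The sketch below captures that flow under stated assumptions – `search_image_urls` is a hypothetical placeholder for an image-search API, and none of this is the MIT team’s actual code.

```python
# Simplified sketch of a noun-to-thumbnail scraping pipeline with no human review.
# `search_image_urls` is a hypothetical placeholder, not a real search API.
import io
import requests
from PIL import Image

def search_image_urls(query, limit=100):
    """Hypothetical stand-in for an image-search API call."""
    raise NotImplementedError

def build_tiny_images(nouns):
    dataset = []
    for noun in nouns:                        # every WordNet noun becomes a query
        for url in search_image_urls(noun):
            try:
                raw = requests.get(url, timeout=10).content
                img = Image.open(io.BytesIO(raw)).convert("RGB")
                # Downsample to 32x32: at this size a reviewer can barely tell
                # what the label refers to, let alone whether it is appropriate.
                dataset.append((noun, img.resize((32, 32))))
            except Exception:
                continue                      # skip broken links and bad files
    return dataset
```

The key point is what is missing: there is no filtering step between the search query and the stored label.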

MIT researchers were aghast at the results when contacted by industry journal The Register, which presented them with the new research, and wasted no time pulling the data set offline.

Team members “are greatly concerned by this and apologise to those who may have been affected,” the research team wrote in explaining its decision and asking the computer-vision community “to refrain from using it in future”.

“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community,” they said, warning that “the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community.”

Given its wide usage in other AI research efforts, walking back the collateral damage of Tiny Images’ inadvertent vulgarity may take time.

Researchers may also struggle to find replacements in a field that offers few alternatives.
“There aren’t many custom image training sets in the wild,” freeCodeCamp’s Aleksey Bilogur notes, “because frankly they’re a pain to create and share.”

The Tiny Images debacle is likely to fuel increased scrutiny of ImageNet and similar efforts, such as the Canadian Institute for Advanced Research’s CIFAR-10 and Google’s Open Images, which are similarly used by researchers developing next-generation computer-vision applications.

Researchers are already pivoting from the Tiny Images debacle to explore issues around the bias it creates – yet this is far from the first time that machine learning techniques have produced problematic results.

The often-cited mantra ‘garbage in, garbage out’ applies particularly to machine learning, whose models have repeatedly been blamed for reinforcing prejudicial assumptions contained in their training data.

Amazon famously shelved an AI-based recruiting tool after it was found to be heavily biased against female job applicants, while Microsoft and Amazon recently stopped providing AI technologies to police amidst concerns that AI-based facial recognition was producing biased results.

Even as AI rapidly becomes ubiquitous, the industry has redoubled efforts to build ethical and trustworthy AI models that are free from bias and know when they’re being manipulated.

Pegasystems, for one, recently released a tool called Ethical Bias Check that is said to detect the biases that AI models ‘learn’ over time.
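To give a sense of what such checks involve, one widely used measure is the ‘disparate impact’ ratio: the selection rate a model gives one group divided by the rate it gives a reference group, with values well below 1 treated as a warning sign. The sketch below illustrates that generic metric with placeholder data; it is not Pegasystems’ implementation.

```python
# Generic disparate-impact check: compare a model's selection rate across groups.
# Groups, outcomes and the 0.8 threshold mentioned below are illustrative placeholders.
def disparate_impact(outcomes, protected, reference):
    """`outcomes` pairs each person's group with whether the model selected them."""
    def selection_rate(group):
        decisions = [selected for g, selected in outcomes if g == group]
        return sum(decisions) / len(decisions) if decisions else 0.0
    return selection_rate(protected) / selection_rate(reference)

sample = [("A", True), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", True)]
ratio = disparate_impact(sample, protected="A", reference="B")
print(round(ratio, 2))   # 0.33 – far below the 0.8 'four-fifths' rule of thumb
```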