Nine months after Apple dropped plans to scan users’ photo libraries for child sexual abuse material (CSAM), a Monash University research team is taking a different tack by calling for adults to voluntarily contribute 100,000 childhood photos for a new AI project.

Launched by the Monash-based AiLECS Lab – a joint venture between the university, the Australian Federal Police (AFP) and the Westpac Safer Children Safer Communities grant program – the new My Pictures Matter (MPM) project aims to use machine learning (ML) techniques to, in effect, mathematically model what a happy child looks like.

A key goal of the project, which has received formal university ethics approval, is to sidestep a convention of ML image analysis: that algorithms must be trained on large numbers of images of the same type they will later be asked to analyse.

Researchers building machine-vision systems often train their algorithms on freely available libraries of stock photos, or by scraping images from social media sites and the wider internet – an approach that recently saw Australian-founded firm Clearview AI investigated by privacy commissioners and ordered to delete its images.

Using the same technique for CSAM investigations would require the algorithms to be fed large quantities of child abuse materials – raising serious ethical, moral, and legal questions.

Instead, the Monash team is enlisting adults to donate happy childhood photos to MPM and formally consent to their use in training an AI/ML engine on large numbers of images of everyday childhood events.

Images will be de-identified and stored securely, with access limited to Monash researchers and the AFP. A ‘data minimisation’ approach means the team will collect no information from participants apart from their email addresses – and those will be stored separately from the images.

Easing the burden on police

Analysis of the crowdsourced, consented ‘safe’ images will deliver what the AiLECS Lab researchers believe is the first large-scale, consent-based, ethically managed AI model of its kind in the world.

The working hypothesis is that, by establishing what ‘safe’ childhood images look like, the resulting model could detect characteristics that deviate from that norm when later presented with a CSAM image during an AFP investigation – flagging the image as potential abuse material.
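In ML terms, this ‘learn what normal looks like, then flag deviations’ approach resembles one-class or anomaly detection, where a model is fitted only to examples of the normal class and scores how far new inputs sit from that distribution. The sketch below is purely illustrative and is not the AiLECS team’s published method: the random vectors stand in for features that would, in practice, be extracted from consented images by a vision model, and the choice of detector is a hypothetical stand-in.

# Illustrative sketch of one-class anomaly detection over image feature vectors.
# NOT the AiLECS/MPM method: the feature extractor, model and scoring here are
# hypothetical stand-ins for whatever the researchers ultimately use.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for feature vectors extracted from consented 'safe' childhood photos
# (in practice these would come from a vision model, not random numbers).
safe_features = rng.normal(loc=0.0, scale=1.0, size=(1000, 128))

# Fit the detector on 'safe' examples only -- it never sees abuse material.
detector = IsolationForest(contamination="auto", random_state=0)
detector.fit(safe_features)

# At triage time, score new, unseen images: -1 means the image deviates from
# the learned 'safe' distribution and should be escalated to a human reviewer.
new_features = rng.normal(loc=3.0, scale=1.0, size=(5, 128))  # deliberately off-distribution
for i, flag in enumerate(detector.predict(new_features)):
    status = "flag for human review" if flag == -1 else "no anomaly detected"
    print(f"image {i}: {status}")

The important property, shared with the MPM design, is that the detector is fitted only on consented ‘safe’ examples and never needs to be trained on abuse material; anything it cannot reconcile with that baseline is escalated to a human reviewer rather than decided automatically.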

“It’s good exploratory research and we’ll be investigating how ML technologies can then be applied with other data sets to make assessments of whether visual files might contain unsafe images of children,” Dr Nina Lewis, a research fellow in the university’s Department of Software Systems & Cybersecurity, told Information Age.

Such assessments are typically conducted manually by teams of specially trained AFP officers within bodies such as the Australian Centre to Counter Child Exploitation (ACCCE), for whom the need to view thousands of often brutal images has been documented to take a severe emotional toll – and even cause post-traumatic stress disorder (PTSD).

The ACCCE received over 33,000 reports of online child exploitation during 2021 alone, with AFP Leading Senior Constable Dr Janis Dalins warning that “reviewing this horrific material can be a slow process and the constant exposure can cause significant psychological distress to investigators.”

The MPM project could minimise this burden, Lewis said: “Anything we can do to not replace humans in this process, but to help triage some of the material and deal with the scale of the problem is really going to be a great help.”

Pushing AI too far?

Yet even as the Monash team explores the efficacy and viability of consent-based image collection, previous projects have struggled to apply similar AI-based image analysis in a way that is both ethical and effective.

Apple’s high-profile plans to scan the iCloud photo libraries of its users for potential CSAM were welcomed by child protection advocates, but concerns that the system could become a de facto mass surveillance tool – capable of automatically flagging individuals’ social and professional associations – forced the company to put the plans on hold.

AI has a spotty history when applied to decision-making around child welfare. The UK’s Metropolitan Police, for example, announced plans several years ago to tap AI to search seized computing devices for CSAM – while just this month, child welfare officials in the US state of Oregon announced they would stop using an AI-powered tool designed to flag families for investigation over potential child neglect.

Authorities concluded that the algorithm had produced racially biased results despite its designers’ efforts to avoid bias with a ‘fairness correction’.

Lewis is well aware that similar issues could emerge in the MPM project – which will not accept any images of naked children even if they are in innocuous settings like the bathtub – as the number of contributed images grows.

“We absolutely anticipate that there are going to be imbalances in representation of what we get,” she explained, “and that might be to do with ethnicity, age, race, age of individuals in the photo, and age of the photos themselves.”

“Because of the nature of crowdsourcing, we’re not actually going to know what we get until we get it.

“Developing machine learning technologies obviously takes a lot of data, and we’re really interested in how that can be done ethically.”