Authorities have raised the alarm after a child protection worker using ChatGPT inadvertently released information about a child in a custody case, a disclosure that could have had serious ramifications for the future placement of that child.

The Department of Families, Fairness and Housing reported the privacy incident to the Office of the Victorian Information Commissioner (OVIC) in December last year, sparking an investigation that has the potential to change the future use of ChatGPT in similar settings.

The worker asked ChatGPT to assist in drafting a Protection Application Report, which was submitted to the Children’s Court to inform decisions about whether a child requires protection.

The investigation found that the child protection worker entered a significant amount of personal and sensitive information into ChatGPT, including names and information about risk assessments relating to the child.

These reports are essential in protecting vulnerable children who require court-ordered protective intervention to ensure their safety and rights.

They contain a Child Protection worker’s assessment of the risks and needs of the child, and of the parents’ capacity to provide for the child’s ongoing safety and development.

In this case, ChatGPT was used to describe the risks posed to a young child living at home with their parents, who had been charged with sexual offences.

The large language model behind ChatGPT doesn’t reason or understand context, and in this instance it downplayed the risks to the child.

The results of that investigation, released this week, uncovered a range of indicators of ChatGPT use throughout the report, including language inconsistent with employee training and Child Protection guidelines, as well as inappropriate sentence structure.

Of particular concern was that sections of the report included personal information and described a child’s doll, which the child’s father had used for sexual purposes, as a notable strength of the parents’ efforts to support the child with age-appropriate toys.

The Protection Application Report also included sensitive information about the mother, father, carer and child, in what amounted to unauthorised disclosures and a major privacy breach.

ChatGPT had downplayed the severity of the actual or potential harm to the child, with the potential to impact decisions about the child’s care.

Right when it’s wrong

The investigation report noted that people may mistakenly believe AI systems get things wrong only occasionally, while otherwise telling the truth.

“We need to understand that AI systems make mistakes, so it is important to verify the accuracy of the output before relying on the model.

“This is especially important when people rely on AI systems to make decisions that affect themselves or others,” OVIC Commissioner Sean Morrison detailed in the report.

The report points out that AI systems aren’t tasked with telling the truth, which may be an issue in a court setting where the safety of children is concerned, as in this case.

“Ethics frameworks around the globe have indicated that AI should not be used in high-risk use cases,” Morrison wrote.

“It is difficult to imagine a higher-risk use case than child protection, where an incorrect opinion could result in lasting serious harm to a child, parents, or both.”

Fortunately, the incident didn’t change the outcome of the child’s case, but it raised alarm bells within the child protection space given the potential harm that could have arisen.

The case is yet another example of both the opportunities and risks that generative AI presents as authorities, governments and companies scramble to create a user guide for genAI that works for their own scenario.

It’s not the first time AI has been blamed for negative outcomes. Lawyers are facing referrals to professional standards boards and people are being accused of ‘misleading the court’ because of the suspected use of ChatGPT.

In many of these cases, judges can’t say for sure how submissions were written, but suggest that AI may have played a role because of increasingly tell-tale signs, such as outright fabricated legal material and distinctive language.

While some uses of genAI may be beneficial, this report illustrates that there are circumstances where the privacy risks are extensive, particularly when sensitive information is involved.

Investigations like this could have a major impact on the path forward as employees deploy genAI in their jobs without always having specific approval from their employer to do so.

In the meantime, many organisations, including those in the university sector, are searching for tools to regulate the use of genAI while also encouraging greater trust and transparency around its use.