Technology experts and civil liberties advocates are calling for greater transparency and stronger safeguards around the NSW Police Force’s use of AI-powered tools made by Australian intelligence software company Fivecast, detailed in documents provided to Information Age.

They say predictive and risk-scoring systems in policing raise serious concerns about bias, reliability, and impacts on personal liberty.

Professor Toby Walsh, chief scientist at UNSW’s AI Institute, said “any such ‘predictive’ tool has limitations such as bias”.

“We saw some of the challenges with COMPAS [Correctional Offender Management Profiling for Alternative Sanctions],” he said.

“I fear we may repeat them with Fivecast and other similar tools.”

Computer scientist Dr Reuben Kirkham expressed similar concerns, saying the lack of “transparency as to where and how these tools are used” leaves him with little assurance that algorithmic assessments are properly safeguarded.

He warned there are “limits to the reliability of material published on the internet”.

“For example, someone opposed to a particular protest could create social media traffic which encourages a stronger police response,” Kirkham said.

He added that “more generally, these systems create a risk of innocent people having the police bashing down their door, rather than going after genuine criminals”.

Jonathan Hall Spence, principal solicitor at the Justice and Equity Centre, said “if police are going to be using these tools, they should be clear and transparent about the safeguards they are putting in place”.

“When it comes to policing, where personal liberty is at stake, the risks are extremely high,” he said.

“They should be working with communities and civil society to ensure these tools are used fairly and appropriately.”

Analysing social media activity

The calls follow the emergence of a Fivecast case study describing how its Onyx platform was used to analyse social media activity linked to protests in Western Sydney.

NSW Police use Fivecast software for intelligence purposes, according to Police Minister Yasmin Catley.

She confirmed this in October in response to questions from Greens MLC Abigail Boyd.

Boyd had asked about the use of a “predictive, broad analytical tool” and referred to US reporting on the Department of Homeland Security’s use of Fivecast’s Onyx platform.

“I understand that Fivecast Onyx is currently being used by the Trump administration to scrape and analyse open-source data, including social media, to assist ICE agents in detaining people,” Boyd said during a budget estimates hearing.

“Fivecast say that they have built it in collaboration with Australian authorities in order to build person-of-interest networks,” she said.

“Is that what the New South Wales police are using it for?”

Catley declined to detail specific use cases, saying NSW Police does not “disclose methodology regarding intelligence gathering”.

NSW Police told Information Age it “uses a range of platforms to prevent, disrupt and investigate crimes”, but would not comment on contracts with Fivecast or the development of “predictive and preventative” AI tools.

A police spokesperson did not dispute the authenticity of the Fivecast case study but said “NSWPF does not use any AI enabled tools to predict crime”.

Police declined to clarify whether that statement applied to past systems described in parliamentary responses as predicting “the day, time, type and location of crimes”.

They also would not say whether an AI recommendation system previously linked to youth crime and domestic violence watchlists was shut down when those watchlists ended in 2023.

Predicting the future

Fivecast’s case study describes how its Onyx platform uses automated link analysis and AI risk detectors to assess people and online groups.

The system includes “word/phrase detectors” that scan platforms such as Telegram.

In the example provided, the software identified accounts “encouraging violent action against police” in Western Sydney.

Fivecast says the case study is only shared with organisations it considers to be genuine prospective customers.

The document states that Onyx’s “AI-powered risk detectors uncover keywords, quotes and images” and are trained to identify “potential risks”.

It outlines how the platform identified what it described as “a protest staging location” using “AI-enabled risk analytics”, a capability not mentioned in the public contract notice.

Fivecast’s risk-scoring tools include “AI-enabled assessment of sentiment and emotion in interactions with a POI [person of interest]”.

Risk levels are influenced by associations with known POIs, online forums, and links between what the system labels “risky entities”, which are used to predict “key instigators”.

Screenshots in the case study show content categories such as “breaking and entering”, “active shooter”, “bomb threat” and “extremists” as well as a preset folder labelled “protest group terms”.
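
The case study does not reveal how Onyx’s detectors or risk scores are actually computed. Purely as a conceptual illustration, a naive keyword detector combined with association-based scoring might look like the sketch below; every term, weight and name in it is hypothetical and is not drawn from Fivecast’s product.

```python
# A deliberately simplified illustration of keyword-based "risk detection"
# and association scoring. This is NOT Fivecast's implementation: all
# terms, weights and names here are hypothetical.

RISK_TERMS = {"protest": 1.0, "violent": 3.0, "police": 1.5}
WATCHLIST = {"watchlisted_account"}  # hypothetical known persons of interest


def keyword_score(post: str) -> float:
    """Sum the weights of flagged terms appearing in a post (naive matching)."""
    text = post.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in text)


def association_score(contacts: set[str]) -> float:
    """Boost a user's score for each link to a watchlisted account."""
    return 2.0 * len(contacts & WATCHLIST)


def risk_score(posts: list[str], contacts: set[str]) -> float:
    """Combine content-based and association-based scores for one user."""
    return sum(keyword_score(p) for p in posts) + association_score(contacts)


# A user posting flagged terms while linked to a watchlisted account
# accumulates a higher score than other users.
print(risk_score(["Violent protest against police!"], {"watchlisted_account"}))
```

Even this toy version shows the failure mode the experts describe: a post condemning or merely reporting a “violent protest” scores exactly the same as one inciting it, and a single link to a watchlisted account raises a user’s score regardless of context.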

What the dataset showed

The example dataset contains posts from July and August 2021 captured by a “risk-detector set to identify mentions of unrest and anger associated with the management of COVID”.

“In this case, information obtained from curated access to a network of groups in Fivecast Onyx identified posts discussing potentially violent protest action and exhibiting threatening behaviour towards public health orders,” the case study states.

The system highlighted Telegram groups including ‘Sydney Rally for Freedom’ and ‘Australia Freedom Chat’.

“Automated and ongoing risk assessment of the posted content flagged the announcement of the newly created group ‘Freedom Fighters of Western Sydney’,” the document states.

It adds that “a specific user promoting the channel was linked to the group’s administration, the now removed Telegram account ‘Crypto king australia’.”

Onyx also generated what it described as “clues to lift” the user’s “online anonymity”, including “his association” to a “mate” filmed “stealing a COVID-19 testing station sign”.

The system drew “comparison” to “known POIs residing in Badgerys Creek” – a location mentioned in the group.

Fivecast presents this example as showing how Onyx can detect users “encouraging others to participate” in a “planned violent protest”.

The company says such detection provides “the opportunity to reorientate resources to intercept participants” and helps authorities “respond to events before they happen”.

Fivecast did not respond to requests for comment.