An analysis of almost 200 school-endorsed apps has found that most start harvesting Australian children’s data within seconds, in contravention of the developers’ own privacy policies, leaving underage users exposed to significant privacy and security risks.

The findings by University of New South Wales (UNSW) researchers come from an audit of around 200 Android educational apps sourced from school recommendation lists, state Department of Education websites, and the Google Play store.

The results were presented in the paper Analysing Privacy Risks in Children’s Educational Apps in Australia, authored by Dr Rahat Masood, a cybersecurity expert at UNSW, and her colleagues Sicheng Jin, Jung-Sook Lee, and Hye-Young (Helen) Paik.

The research team found that many of the apps collected sensitive data, transmitted it to third parties, and hid behind privacy policies so complex that very few parents could understand them.

Masood said the team wanted to analyse whether Australian authorities – the federal government and state education departments – were aware of the security and privacy risks for children as teaching goes digital and comes to rely on tech suppliers.

The ‘illusion of safety’

In some cases, apps marketed to young children – using terms such as “Kids”, “Preschool”, or “ABC” – were no safer than general-audience apps, and some showed worse alignment between their stated privacy commitments and actual behaviour.

The research paper described this as “the illusion of safety” – child-centric branding which cultivates parental trust without providing genuine protection.

A staggering 76 per cent of apps targeted at children showed at least one form of policy distortion, compared with 67 per cent of general educational titles, according to the study.

The researchers found apps carrying child-friendly names often embedded the same advertising and analytics tools found in commercial entertainment apps, including the same tools used to track adults across the internet.

Their analysis found that 89.3 per cent of apps began transmitting data to third parties before a user had interacted with the app at all.

Opening an app was enough to send device identifiers, location metadata, and other sensitive information to analytics platforms and advertising networks.

“Even if you are not interacting with the app – you just open it and that’s it – it is still transferring lots of data,” Masood said.

“Telemetry data – which mainly refers to tracker-related identifiers and [is] used for the automatic collection and transmission of data to remote servers – despite just opening the app and not using any educational feature, it is still transferring a lot of information that is sensitive and can actually identify your device.”
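This kind of launch-time transmission can be observed with a simple dynamic test: start a traffic capture, open the app, and provide no input at all. The Python sketch below illustrates one such setup; the package name is hypothetical, and it assumes a rooted Android test device with tcpdump installed, rather than reflecting the researchers’ actual harness.

```python
import subprocess
import time

PACKAGE = "com.example.kids_abc"  # hypothetical app package name
IDLE_SECONDS = 30                 # how long to watch an untouched app

# Start capturing all device traffic before the app is launched.
# Assumes a rooted test device with tcpdump installed.
capture = subprocess.Popen(
    ["adb", "shell", "tcpdump", "-i", "any", "-w", "/sdcard/idle.pcap"]
)

# Launch the app's main activity, then send no input whatsoever.
subprocess.run(
    ["adb", "shell", "monkey", "-p", PACKAGE,
     "-c", "android.intent.category.LAUNCHER", "1"],
    check=True,
)
time.sleep(IDLE_SECONDS)

# Stop the capture and pull it off the device; any requests to
# analytics or advertising domains in the pcap happened with zero
# user interaction.
capture.terminate()
subprocess.run(["adb", "pull", "/sdcard/idle.pcap", "idle.pcap"], check=True)
```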

Feeding Facebook and Google

The UNSW researchers found that 83.6 per cent of the apps checked transmitted persistent identifiers – unique codes that can track a device across sessions and across different apps.

More than two-thirds (67.9 per cent) of the apps contained at least one embedded tracker or analytics tool, such as Google's Firebase, Facebook, or Unity Analytics.

Masood noted that “none of these are needed to actually run the educational app”.
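Embedded trackers like these can often be detected statically, because their class descriptors are stored as plain strings inside an APK’s dex files – the signature-matching approach popularised by tools such as Exodus Privacy. The Python sketch below uses a small illustrative subset of signatures and a hypothetical APK path; it is not the study’s tooling.

```python
import zipfile

# Illustrative subset of tracker class-descriptor signatures, in the
# style of the Exodus Privacy list; not the researchers' actual set.
TRACKER_SIGNATURES = {
    b"Lcom/google/firebase/analytics": "Google Firebase Analytics",
    b"Lcom/facebook/appevents": "Facebook App Events",
    b"Lcom/unity3d/services/analytics": "Unity Analytics",
}

def find_trackers(apk_path: str) -> set:
    """Report known trackers whose class descriptors appear in the
    APK's dex files (descriptors are stored there as plain strings)."""
    found = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            if name.endswith(".dex"):
                data = apk.read(name)
                for signature, tracker in TRACKER_SIGNATURES.items():
                    if signature in data:
                        found.add(tracker)
    return found

print(find_trackers("kids_abc.apk"))  # hypothetical APK path
```

A static signature only shows that tracker code is present; whether it fires the moment the app opens is what the dynamic test above demonstrates.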

The research team also analysed the privacy policies of the apps and found that just 3 per cent were “fairly easy” to read.

The other 97 per cent required university-level literacy or higher to grasp their meaning.

“Nobody will understand these terminologies and jargon,” she said.

“Comprehension, readability, understandability – all these metrics that we analysed were all very bad.”
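Findings like these typically rest on standard readability metrics such as Flesch Reading Ease, where scores of 70–80 count as “fairly easy” and scores below about 50 indicate college-level text. A minimal sketch using the open-source textstat library, with a hypothetical policy file, shows how such a score is computed:

```python
import textstat  # pip install textstat

# Hypothetical local copy of an app's privacy policy text.
with open("privacy_policy.txt") as f:
    policy = f.read()

# Flesch Reading Ease: 70-80 reads as "fairly easy"; scores below
# roughly 50 correspond to college-level text.
ease = textstat.flesch_reading_ease(policy)
grade = textstat.flesch_kincaid_grade(policy)

print(f"Flesch Reading Ease: {ease:.1f}")
print(f"Flesch-Kincaid grade level: {grade:.1f}")
```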

On top of that, the legal text often doesn’t reflect what the app actually does.

Just a quarter of the apps examined – about 50 – showed full consistency between their stated privacy policies and their observed behaviour during testing.

“We matched the privacy policy with the dynamic analysis – when the app is running, whether it is collecting the data and whether it is mentioned in the privacy policy or not,” Masood said.

“Only one in four were matching. Some of the policies appear to have been generated using AI tools.”

One app listed in its store description as “Data Not Collected” was observed initialising Firebase Analytics and transmitting persistent identifiers from the moment it first launched.

Another that claimed “no ads, no tracking” was found to be sending data to Unity Analytics and Google before a user had done anything.
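Conceptually, the consistency check boils down to comparing the set of data categories a policy declares with the set observed leaving the device. The sketch below uses invented categories purely for illustration; the paper’s actual pipeline matches parsed policy statements against dynamic-analysis traffic logs.

```python
# Data categories declared in the privacy policy (hypothetical).
declared = {"usage statistics", "crash reports"}

# Data categories observed leaving the device during dynamic
# analysis (hypothetical).
observed = {"usage statistics", "device identifier", "location metadata"}

undisclosed = observed - declared
if undisclosed:
    print("Policy distortion: transmitted but never disclosed:")
    for category in sorted(undisclosed):
        print(f"  - {category}")
else:
    print("Policy and observed behaviour are consistent.")
```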

Calls for crackdown

Masood said the problem started with each state’s Department of Education drawing up its recommended list of apps for educators.

“They look at very high-level details and they don’t download the app – they don’t do the dynamic analysis, they don’t go through the accessibility and readability of the privacy policies,” she said.

Schools are told the apps have been assessed through a quality assurance framework, but Masood said the process is inadequate. Teachers are largely unaware of the risks embedded in these tools, while parents assume that if an app has been approved, it is safe.

“[Teachers] are out of resources – first of all – and they don’t know about any security issues. They were just given an app to use and that’s it,” she said.

Masood and her colleagues believe a better solution would be a “traffic light” system: a visual summary of an app’s privacy and security profile that bypasses the legal jargon.

Their research calls for stricter oversight of the “child-directed” app category, arguing that labels such as “Kids” or “Educational” should be backed by a verified technical baseline rather than serving merely as content descriptors.

They also want regulators to prohibit “idle telemetry” – the transmission of data before a user has done anything.