The effectiveness of Australia’s social media age ban hinges on the willingness of tech companies to work together and share control of age assurance processes because there is no single solution that works across the board, a landmark government-commissioned report has found.
The review, conducted by the UK-based Age Check Certification Scheme, found that age assurance can be done effectively in Australia in a way that preserves privacy, but despite the “plethora” of options, there are some issues with all solutions on offer.
It comes just over three months before the federal government's under-16 social media age ban comes into effect, and before the government has outlined what "reasonable steps" social media companies will be required to take in order to comply with the new law.
The review looked into the effectiveness, reliability and privacy impacts of 60 different age assurance technologies from almost 50 providers, including Apple, Google, Snapchat and TikTok.
These methods included providing government ID documents, using facial recognition to estimate the age of an individual, and using data such as posts to infer the age of a social media user.
It came to similar conclusions for each of these, finding that they are viable and can be effective in Australia, but there are issues across the board and no one-size-fits-all solution.
“We did not find a single ubiquitous solution that would suit all use cases, nor did we find solutions that were guaranteed to be effective in all deployments,” the review found.
These age assurance tools "cannot be considered infallible", it found: many failed to accurately determine the age of users under certain conditions, while others came with significant privacy and data retention issues.
The government-commissioned report also found that there is no viable way to completely stop young people in Australia from using VPNs to get around the social media age ban.
It found that some technologies aiming to identify users on VPNs are “promising” but raise their own challenges in relation to accuracy, evasion tactics, and user privacy.
‘Fatal flaws’
Overall, the review found that the implementation of age assurance in Australia “depends on the willingness of a small number of dominant tech companies to enable or share control of the processes” and that “coordination among dominant providers is essential if any truly ecosystem-wide age assurance model is to succeed”.
According to the Greens, the report demonstrates the "fatal flaws" in the social media minimum age reforms and "makes the case against a social media age ban".
“The risks it identifies include mass surveillance, biometric data collection, Big Tech control, and systematic bias against girls and people of colour, which clearly outweigh any potential benefits,” Greens Digital Rights spokesperson David Shoebridge said.
But Communications Minister Anika Wells said the review shows that there’s “no excuse for social media platforms in this country not to have age verification methods ready for 10 December”.
“The report is the latest piece of evidence showing digital platforms have access to technology to better protect young people from inappropriate content and harm,” Wells said in a statement.
“While there’s no one-size-fits-all solution to age assurance, this trial shows there are many effective options and importantly that user privacy can be safeguarded.”
The different methods on offer
The report found that age verification, in which official government documents are used to prove an individual's age, is "technically mature, privacy-conscious and inclusive" and offers a "viable and trustworthy solution for protecting children and enforcing age-based access controls in Australia's digital environment".
When it comes to age estimation, which involves using data such as photos to estimate the age of users, the review found that while it can technically work, it lacks the “fixed, attestable properties required for credential-based identity systems”.
“It is a fundamental misunderstanding of the capabilities of age estimation to test whether it can implement exactly a specific age-restriction without either accepting there will be a margin of error or applying a buffer age to reduce that margin to an acceptable level, acknowledging that false negatives will then be inevitable and alternative methods will be required to correct them,” it said.
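To make the buffer-age idea concrete, here is a minimal sketch of how a service might apply it. The two-year buffer, the thresholds and the escalation step are illustrative assumptions, not figures from the report:

```python
# Sketch of the buffer-age approach the report describes.
# RESTRICTION_AGE matches the under-16 law; BUFFER_YEARS is an
# assumed margin to absorb estimation error, not a trial figure.

RESTRICTION_AGE = 16
BUFFER_YEARS = 2

def check_access(estimated_age: float) -> str:
    """Classify a user based on an age *estimate*, not a verified age."""
    if estimated_age >= RESTRICTION_AGE + BUFFER_YEARS:
        return "allow"       # comfortably above the limit
    if estimated_age < RESTRICTION_AGE - BUFFER_YEARS:
        return "deny"        # comfortably below the limit
    # Inside the buffer zone the estimate is inconclusive. The report
    # notes false negatives are then inevitable, so an alternative,
    # higher-assurance method is needed to correct them.
    return "escalate"

print(check_access(19.0))  # allow
print(check_access(15.2))  # escalate -> needs a stronger check
```

The buffer trades one problem for another: it narrows the error margin on the hard limit, but anyone falling inside the buffer zone must be pushed to a more invasive check.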
Age inference involves analysing signals about an individual held by a trusted source, such as email domain recognition and interaction patterns, to estimate their age.
The Australian review found that this method raised issues when used across a platform, particularly around digital profiling and the reuse of inferences across contexts.
An approach to age validation that combines these methods in sequence, starting with the least invasive, was lauded as one that "allows services to manage trade-offs dynamically using the lightest effective method wherever possible and only requesting higher-assurance inputs when necessary".
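As a rough illustration of that layered pattern, the sketch below tries the cheapest check first and only escalates when no confident result is available. The method names, confidence thresholds and stub checkers are hypothetical; the report describes the pattern, not a concrete implementation:

```python
# Illustrative "least invasive first" age assurance cascade.
# All names and thresholds here are assumptions for illustration.

from typing import Callable, Optional

# Each checker returns (age_requirement_met, confidence),
# or None if it cannot produce a result for this user.
Checker = Callable[[dict], Optional[tuple[bool, float]]]

def infer_from_signals(user: dict) -> Optional[tuple[bool, float]]:
    """Cheapest layer: infer age from existing account signals."""

def estimate_from_photo(user: dict) -> Optional[tuple[bool, float]]:
    """Middle layer: facial age estimation, with a known error margin."""

def verify_document(user: dict) -> Optional[tuple[bool, float]]:
    """Highest-assurance layer: government ID verification."""

CASCADE: list[tuple[Checker, float]] = [
    (infer_from_signals, 0.95),   # accept only very confident inferences
    (estimate_from_photo, 0.90),  # then estimation, buffer zone in mind
    (verify_document, 0.99),      # most invasive method comes last
]

def assure_age(user: dict) -> bool:
    for checker, required_confidence in CASCADE:
        result = checker(user)
        if result is None:
            continue                 # method unavailable: escalate
        satisfied, confidence = result
        if confidence >= required_confidence:
            return satisfied         # stop at the lightest decisive method
    return False                     # nothing was decisive: deny access
```

The design choice the report praises is visible in the loop: higher-assurance (and more privacy-invasive) inputs are only requested when the lighter methods fail to give a confident answer.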
The review also looked at giving parents control over what content their children can see online, or requiring their consent, but found that these solutions work best as add-ons to other methods and come with privacy and transparency problems.