Google says it will begin using artificial intelligence systems to estimate the age of its users, as it responds to potential changes to online safety legislation in countries such as Australia and the United States.
Neal Mohan, CEO of Google-owned YouTube, announced the plans last week when he revealed the video streaming giant would use machine learning — a branch of AI — to help it distinguish between adults and younger viewers in 2025.
The goal of the move was “to help provide the best and most age-appropriate experiences and protections”, Mohan said.
Google’s senior vice president of core systems and experiences, Jen Fitzpatrick, added in a blog post the following day that the tech giant would test machine learning models for age estimation in the US this year.
“We’ll bring this technology to more countries over time,” she wrote.
When contacted by Information Age, Google did not respond to a request for comment on whether it planned to use the technology in Australia.
How will Google’s AI age estimation work?
Google's AI systems would estimate a user’s age using data such as how long they have had a Google account, the types of videos they watched on YouTube, and the content they sought out on Google Search, the company told US media.
If the system believed a user may be under 18, their account settings would be automatically tightened, but they would still be able to verify their age with a video selfie, a credit card, or a government ID, Google said.
Users believed to be under 18 would not see YouTube content that was inappropriate for children, and the SafeSearch filter would be turned on to remove explicit content from their Google Search results, the company added.
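Google has not published how its system works internally, but the flow described above — combining behavioural signals into an under-18 prediction, tightening settings for likely minors, and letting verified adults override the result — can be sketched as a toy heuristic. All feature names, thresholds, and weights below are hypothetical illustrations, not Google's actual model:

```python
from dataclasses import dataclass, field

# Hypothetical signals loosely modelled on those Google described:
# account tenure, YouTube viewing categories, and search topics.
@dataclass
class UserSignals:
    account_age_years: float
    watched_categories: set = field(default_factory=set)
    search_topics: set = field(default_factory=set)

# Illustrative category list only; a real system would learn these.
CHILD_LEANING = {"cartoons", "toy unboxing", "school homework"}

def estimate_is_minor(signals: UserSignals) -> bool:
    """Toy heuristic: flag an account as likely under 18 when at
    least two of three signals lean young."""
    score = 0
    if signals.account_age_years < 3:
        score += 1
    if signals.watched_categories & CHILD_LEANING:
        score += 1
    if signals.search_topics & CHILD_LEANING:
        score += 1
    return score >= 2

def apply_protections(signals: UserSignals,
                      age_verified_adult: bool = False) -> dict:
    """Tighten settings for likely minors, unless the user has
    verified their age (e.g. selfie, credit card, government ID)."""
    if estimate_is_minor(signals) and not age_verified_adult:
        return {"safe_search": True, "restricted_content": True}
    return {"safe_search": False, "restricted_content": False}
```

A real deployment would replace the hand-set scoring with a trained machine learning model, but the control flow — predict, restrict, allow verification to override — matches what Google described to US media.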
It is unclear how the system might be utilised by other popular Google services such as Gmail, Google Play, Google Chrome, and Google Assistant.
Social media giant Meta, which owns Facebook, Instagram, Threads, and WhatsApp, announced in September 2024 that it would begin testing AI systems to identify children or teenagers who have lied about their age.
“We train this technology with signals like profile information, when a person’s account was created and interactions with other profiles and content,” the company said in a report after it announced children on Instagram would be placed into dedicated Teen Accounts.
“From those signals, we can begin to make calculations about the likelihood of whether someone is an adult or a teen, even if a teen has listed an adult birthday on their account.”
While Meta said it was working to make sure its AI models were accurate, it also admitted it “may make mistakes” and users would still be able to challenge any AI predictions they believed were inaccurate.
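Meta likewise has not disclosed its model, but its description — weighted signals producing a likelihood that a user is a teen regardless of the listed birthday, with a way to challenge mistakes — resembles a standard logistic scoring setup. The features, weights, and threshold below are invented for illustration only:

```python
import math

# Hypothetical feature weights standing in for signals Meta named:
# profile information, account creation date, and interactions.
WEIGHTS = {
    "account_age_days": -0.002,     # newer accounts lean younger
    "teen_interaction_ratio": 3.0,  # share of interactions with teen accounts
    "bio_mentions_school": 1.5,
}
BIAS = -1.0

def teen_likelihood(features: dict) -> float:
    """Logistic score in [0, 1]: likelihood the user is a teen,
    regardless of the birthday listed on the account."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def classify(features: dict, challenged_and_verified: bool = False) -> str:
    # Users can contest an inaccurate prediction; a successful
    # challenge overrides the model's output.
    if challenged_and_verified:
        return "adult"
    return "teen" if teen_likelihood(features) >= 0.5 else "adult"
```

Note the explicit override path: as Meta acknowledged, the model "may make mistakes", so the human challenge always wins over the prediction.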
Meta says it will use AI to identify children or teenagers who have lied about their age on social media. Image: Meta / Supplied
AI age estimation in Australian trial
Australia’s controversial banning of under-16s from social media is expected to begin later this year, with similar AI-enabled age estimation technologies to be tested for potential use under the scheme.
Such AI systems were not perfect, however, and techniques may be needed to “avoid false positives based on inherent performance errors”, according to the project plan for Australia’s age assurance trial.
Testing of age estimation methods for the Australian scheme would also include “voice, hand geometry or typing speed analysis”, according to the plan.
A final report from the age assurance trial is expected to be released by mid-2025.
Educational apps, online games, and some messaging services such as Signal are expected to be exempt from Australia's under-16s social media ban.
Meta and TikTok have reportedly criticised the federal government’s plan to give Google’s YouTube an exemption from the ban due to its educational content, according to Capital Brief.
Proposed US social media ban called ‘flawed’
A US bill introduced in 2024 has sought to ban under-13s from holding social media accounts in that country, while also preventing companies from using recommendation algorithms for users under the age of 17.
The so-called Kids Off Social Media Act (KOSMA) would require social media platforms to use age estimation systems to determine their users' ages.
US trade association NetChoice, which counts both Google and Meta among its members, has previously called KOSMA “a flawed approach” and argued it “creates serious cybersecurity risks, undermines parental rights and autonomy and violates the [US] Constitution”.
The Kids Online Safety Act (KOSA), first proposed in 2023, previously sought the establishment of guidelines and a duty of care system to force tech companies to design their platforms in ways which mitigate harms to young people.
While the bill passed the US Senate in July 2024, it is yet to be passed by the US House of Representatives and has faced criticism from digital rights and free speech groups, some of which have raised privacy concerns around age estimation systems.
The Australian government’s similar plan to legislate a digital duty of care for Big Tech companies has been delayed until after this year’s federal election.