There have been kid-free cruises and kid-free hotels for decades, but now social media is joining the trend – with under-16 social media users to be barred from a range of popular platforms as the government’s new laws take effect after years of preparation.

But will it work? And what does “work” even look like?

Those questions flew thick and fast as Meta began dumping users aged under 16 while many tested workarounds – using VPNs to make it appear they're in another country, for example, or exploiting parental largesse to bypass AI age-verification systems.

The government is realistic about the legislation’s early impact – yet even where parents are helping children bypass age checks, their involvement will at least prompt new conversations and give them a heads-up about what their children are accessing.

This and other yet-to-be-seen effects of the ban are part of “a significant shift in how Australia approaches children’s safety online,” ACS CEO Josh Griggs said while warning that “banning social media will not be a silver bullet.”

“We need young Australians to feel safe and empowered online,” he added.

“Technology is a permanent part of their lives [and] we need to match strong safeguards with digital literacy, parental support, and ongoing collaboration between government and industry.”

What is actually happening today?

Wednesday, 10 December marks the commencement of the Online Safety (Age-Restricted Social Media Platforms) Rules 2025, made under amendments to the Online Safety Act 2021 that Parliament passed in November 2024 after a one-day consultation period.

The rules require social media platforms – a term with many definitions, but one that generally covers platforms that let users post content and engage directly or in groups with other users – to identify and ban users younger than 16.

Parents and teens are exempt from punishment, and the government has put the onus on Big Tech to find ways to make the legislation work, with penalties of up to $49.5 million for platforms that consistently fail to stop children from having accounts.

Who is banned from social media, and for how long?

Social media platforms already ban users under 13 thanks to the US COPPA law, which means the new restrictions should, in theory, mainly affect 13, 14, and 15-year-olds.

However, a recent eSafety study found social media services make only cursory checks, with 80 per cent of children aged 8 to 12 – equivalent to 1.3 million children – using one or more services last year and 36 per cent doing so with their own accounts.

That means the new rules apply to millions of Australian children, who will have to fill their spare time with sports, reading, family time, pets, and other hobbies until they turn 16.

How can they tell how old I am?

Despite trialling age verification technologies with mixed results, the government has not mandated any particular method for determining users’ age – with some vendors arguing that Apple iOS or Android devices should handle age verification.

Major sites are using AI age estimation, which takes several selfies from different angles and runs them through age-assurance services such as k-ID, Yoti, and Epic Games’ Kids Web Services to estimate your age.
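
The details differ between vendors, but the broad pattern is similar: capture a few selfies, get back an estimated age, and fall back to stronger checks when the estimate sits close to the cut-off. Here is a minimal Python sketch of that flow; the endpoint, field names, and thresholds are hypothetical and are not modelled on any of the vendors named above.

```python
# Hypothetical sketch of how a platform might call a third-party age-estimation
# service. The endpoint, field names, and thresholds are invented for this
# example and are not modelled on Yoti, k-ID, or Kids Web Services.
import requests

ESTIMATION_URL = "https://age-check.example.com/v1/estimate"  # placeholder endpoint
MINIMUM_AGE = 16
BUFFER_YEARS = 2  # margin to absorb the model's estimation error


def check_user_age(selfie_paths: list[str]) -> str:
    """Send selfies to the (hypothetical) estimator and return an access decision."""
    files = [("images", open(path, "rb")) for path in selfie_paths]
    try:
        response = requests.post(ESTIMATION_URL, files=files, timeout=30)
        response.raise_for_status()
        estimated_age = response.json()["estimated_age"]
    finally:
        for _, handle in files:
            handle.close()

    if estimated_age >= MINIMUM_AGE + BUFFER_YEARS:
        return "allow"       # comfortably over 16: no further checks
    if estimated_age < MINIMUM_AGE - BUFFER_YEARS:
        return "block"       # comfortably under 16: restrict the account
    return "verify_id"       # borderline: fall back to formal ID verification


print(check_user_age(["front.jpg", "left.jpg", "right.jpg"]))
```

The buffer reflects a common design choice in age assurance: facial estimation is typically only accurate to within a few years, so borderline results are routed to a stronger check rather than decided outright.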

The social media ban for under-16s comes into effect from today. Source: eSafety

Others are using simple maths: if you’ve had an active account since 2015, for example, it’s safe to assume you’re over 16.
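
As a rough illustration of that arithmetic: combining an account’s creation date with the minimum sign-up age the platform enforced at the time (13, under COPPA) gives a lower bound on the user’s current age. The field names and the 13-year floor in the sketch below are assumptions for illustration, not any platform’s actual logic.

```python
# Illustrative account-age heuristic: a user who created an account when the
# platform's minimum sign-up age was 13 must now be at least 13 plus the
# account's age in years. The 13-year floor is an assumption for this sketch.
from datetime import date

MINIMUM_SIGNUP_AGE = 13   # COPPA-era floor assumed at account creation
REQUIRED_AGE = 16


def minimum_possible_age(account_created: date, today: date | None = None) -> int:
    """Lower bound on the account holder's current age."""
    today = today or date.today()
    account_age_years = (today - account_created).days // 365
    return MINIMUM_SIGNUP_AGE + account_age_years


created = date(2015, 6, 1)
print(minimum_possible_age(created))                  # 23 as of late 2025
print(minimum_possible_age(created) >= REQUIRED_AGE)  # True: clearly over 16
```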

AI is also being used to analyse your viewing patterns, communications history, likes, and other indicators to make an educated guess as to whether you’re under 16: few 12-year-olds, for example, are likely to be watching videos of opera singers or bathroom caulking how-tos.
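
None of the platforms has published how these models work, but a heavily simplified, rule-based version of the idea might look like the sketch below; the signals, weights, and threshold are invented for illustration and bear no relation to any real system.

```python
# Toy illustration of inferring "likely under 16" from behavioural signals.
# The signals, weights, and threshold are invented for this sketch; real
# platforms use far richer (and undisclosed) machine-learning models.
SIGNAL_WEIGHTS = {
    "follows_mostly_school_age_accounts": 0.4,
    "active_mainly_outside_school_hours": 0.2,
    "watches_adult_hobby_content": -0.3,   # e.g. opera recitals, caulking how-tos
    "account_older_than_five_years": -0.5,
}


def likely_under_16(signals: dict[str, bool], threshold: float = 0.5) -> bool:
    """Sum the weights of the signals that are present and compare to a threshold."""
    score = sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))
    return score >= threshold


print(likely_under_16({
    "follows_mostly_school_age_accounts": True,
    "active_mainly_outside_school_hours": True,
}))  # True: score of 0.6 clears the 0.5 threshold
```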

If AI wrongly identifies you as being under 16, don’t complain to the Telecommunications Industry Ombudsman (TIO); most services offer an age verification process that lets you use formal ID to prove your age, so check the service’s website for your options.

What social media platforms are affected?

eSafety consulted widely to determine which sites should be subject to what it calls the Social Media Minimum Age (SMMA) obligation – but deciding which sites count as social media has been hotly debated.

Facebook, Instagram, Threads, Snapchat, TikTok, and X were obvious targets, while eSafety wavered over – but ultimately decided to regulate – Kick, Twitch, YouTube, and Reddit, which slammed what it calls a ‘legally erroneous’ ban and is preparing to sue.

Discord, GitHub, Google Classroom, Lego Play, Messenger, Pinterest, Roblox, Steam, WhatsApp and YouTube Kids are exempt from the SMMA rules, while other sites – notably Twitter clone Bluesky – have introduced age verification even though they don’t yet have to.

Will I regain access to my account at 16?

eSafety has described the new rules as a ‘delay’ rather than a ‘ban’, implying that accounts should be there and waiting for you when you turn 16.

Whether that actually happens, however, depends on the service: Meta warned users it would delete their content, while Reddit has said it will simply freeze accounts until users are confirmed to be at least 16, and will let younger users download their content if they want.

If you’ve been blocked, don’t count on your content being available when you turn 16; hopefully you, your older friends and followers, or even your parents will have saved some of it.

Is all of this really going to work?

Australia may be the first to ban under-16s from social media, but it’s far from the only jurisdiction demanding age verification: the UK, as well as several EU countries and US states including Mississippi, South Dakota, Wyoming, and Ohio, are all following suit.

With so many questions still unanswered about how the ban will work and what its long-term effects will be, the policy marks the beginning of a years-long social change that could, legal challenges aside, reverse years of online harm.

However effective the ban is or isn’t, “we recognise the intent to protect under-16s from online harms,” ACS president Helen McHugh said, adding that “personal experience has shown how easily young people can be drawn into toxic online environments.”

“The internet can connect children to people and influences that parents may not be aware of or able to control,” she continued.

“We believe technology companies have the responsibility to act ethically and protect their users from harms that may occur on their platforms.”

Visit eSafety’s official hub for the latest information on the social media age restrictions, or attend one of numerous upcoming webinars.