Australia just flipped the switch on the world’s first social media ban for anyone under 16. As of December 10, 2025, platforms like Instagram, TikTok, Facebook, X, and Reddit face $32 million fines if they don’t kick out underage users. The government says it’s protecting kids from “predatory algorithms.” Privacy experts call it surveillance theater. And within hours of enforcement, teens were already bypassing it by drawing on facial hair and firing up VPNs.
This isn’t child protection. It’s a privacy nightmare that won’t work, can’t be enforced, and sets a dangerous precedent for government control of the internet.
Already Broken: When a Sharpie Defeats Your Policy
The ban is technically unenforceable, and it began failing the moment it launched. Within hours of the December 10 enforcement, local reports confirmed what security experts predicted: many children had already bypassed the ban.
The methods? Hilariously low-tech. Teens are fooling AI age verification by drawing on facial hair with makeup or using older siblings’ faces during video selfies. Others are using VPNs to make it appear they’re accessing platforms from outside Australia. When Florida implemented a similar law earlier this year, VPN demand surged by 1,150%. Australian teens aren’t any less savvy.
The Australian government isn’t even pretending this will work perfectly. They’ve openly conceded the ban will be “far from perfect at the outset, and canny teenagers will find ways to circumvent it.” Tech companies, including Google, warned the policy would be “extremely difficult to enforce.” Government-commissioned reports pointed to “inaccuracies in age-verification technology.”
When your enforcement mechanism can be defeated by a Sharpie and a $5/month VPN subscription, you don’t have a policy. You have security theater.
The Privacy Cost: Surveilling Everyone to Stop No One
To enforce this ban, platforms must verify the age of every user – not just teenagers. That means one of two options: AI facial age estimation (85-90% accurate) or government ID uploads (98% accurate but requiring everyone to hand over sensitive documents).
Both options are privacy nightmares. AI facial analysis requires collecting biometric data from all users. Document verification means creating centralized databases of government IDs – in a country that’s suffered multiple major security breaches in recent years.
Privacy experts are blunt about the risks. “Age verification requires collecting sensitive data, including government IDs, biometrics, creating risk in terms of hackers,” one specialist noted. “It is also normalising surveillance for young people.”
Let’s do the math. To keep out roughly 10% of users (under-16s), you must surveil 100% of users. That’s a privacy-to-protection ratio that doesn’t make sense – especially when the protection part demonstrably doesn’t work.
Critics warn this is just the beginning. “Any measure that seeks to monitor online activity is the beginning of tighter surveillance,” they argue. Once you normalize requiring government-issued ID to access social media, that infrastructure doesn’t go away. It expands.
When 15-Year-Olds Have to Sue for Free Speech
This isn’t just privacy advocates complaining. Two 15-year-olds – Noah Jones and Macy Neyland – are suing the Australian government through the Digital Freedom Project. Their argument? The ban “robs” young Australians of their freedom of political communication, an implied constitutional right.
The legal basis is straightforward: Australia’s Constitution requires that parliamentarians be “chosen” by the people. Without freedom to communicate about political matters, that choice isn’t meaningful. The ban restricts that freedom for anyone under 16.
YouTube has also threatened a High Court challenge on the grounds that the ban burdens political communication.
The government points to the 77% public support for the ban. But constitutional rights aren’t subject to popular vote. When 15-year-olds have to sue their government to exercise basic political communication rights, maybe the problem isn’t the teenagers.
What Should Actually Happen
There are evidence-based solutions that work better than bans: digital literacy education, trust-based parenting, and platform accountability for algorithm design.
Research shows 85% of parents believe schools should teach safe social media use and digital literacy. A study of 4,000 teens found that 65% with positive parental communication had healthier relationships with technology, greater well-being, and better body image. As one teenager put it: “Perhaps teens should be exposed to media literacy in the same manner and time allotment that they learn driver’s or reproductive education.”
But the real solution is platform accountability. The root problem isn’t that kids have access to social media. It’s that platforms design predatory algorithms optimized for engagement at any cost – including mental health. Instead of banning access, regulate the algorithms. Force platforms to build “safe-by-design” products through duty-of-care legislation.
You don’t solve algorithmic harm by banning kids from platforms. You solve it by forcing platforms to stop designing harmful algorithms. But that would require regulating billion-dollar tech companies, which is apparently harder than surveilling teenagers.
The Global Domino Effect
Australia calls itself the “world’s first domino” in a global movement toward social media age restrictions. The UK is implementing its Online Safety Act with age verification provisions. France is considering similar measures. In the US, multiple states have proposals pending, and Senator Brian Schatz has introduced the “Kids Off Social Media Act.”
If every country follows Australia’s model, you’ll need government-issued ID verification to access the internet. That’s not a safer internet – that’s a controlled internet.
The 224 comments on the Hacker News discussion of this policy show the global tech community is paying close attention. They understand what’s at stake: once you build the infrastructure for age-verified internet access, that infrastructure becomes permanent. It gets repurposed. It expands.
The Choice We’re Making
Australia’s ban is well-intentioned but fundamentally flawed. It’s technically unenforceable (the VPN surge and facial-recognition bypasses prove that). It’s a privacy nightmare (surveilling everyone to maybe stop some kids). It may be unconstitutional (teens are suing). And it ignores evidence-based solutions that actually work.
Most dangerously, it normalizes the idea that accessing the internet requires government permission. That’s a precedent we can’t walk back.
The real question isn’t whether Australia’s ban protects children. The evidence already shows it doesn’t. The question is whether we’re willing to sacrifice everyone’s privacy and freedom to maintain the illusion that it does.