Australia Sets 16+ Age Limit on Social Media in Major Online Safety Reform

Jul 30

In a major shift aimed at protecting children from online harm, Australia has passed a groundbreaking law that bans anyone under the age of 16 from holding accounts on certain social media platforms. The Online Safety Amendment (Social Media Minimum Age) Act 2024, passed in November 2024, will come into effect in December 2025.

The legislation targets platforms including TikTok, Instagram, Facebook, Snapchat, Reddit, and, following the reversal of a previous exemption, YouTube. The move is intended to shield younger users from dangers such as cyberbullying, exposure to harmful content, and potential mental health issues.

What’s Covered, and What’s Not

Messaging services such as WhatsApp, as well as education-focused platforms like Google Classroom, are exempt from the ban. Gaming platforms also fall outside the scope of the legislation.

Social media companies will be required to take "reasonable steps" to prevent underage access. Failure to comply could result in fines of up to A$49.5 million (roughly US$32 million). Notably, the law doesn't impose penalties on individual users or families, and parental consent cannot be used to override the restriction.

Age Verification and Privacy Concerns

To enforce the law, platforms are experimenting with age verification tools such as facial recognition and third-party data checks. However, these methods have sparked privacy and accuracy concerns.

One example is the ID verification feature on X (formerly Twitter), which requires users to upload a government-issued ID and a selfie for matching. Currently offered to X Premium users, the system is powered by the third-party verification company Au10tix and has drawn both interest and scrutiny over its potential use in age assurance.

X’s approach, if adopted more widely across platforms, could become part of the compliance toolkit used to meet Australia's legal requirements. However, concerns remain about data storage, the role of third-party processors, and accessibility for users without official identification.

Mixed Reactions and Human Rights Concerns

The law has drawn criticism from advocacy groups and legal experts. The Australian Human Rights Commission warns that the ban could infringe on children's rights to free expression, access to information, and online inclusion.

Digital rights advocates also question the law’s effectiveness, citing likely workarounds such as VPNs and fake profiles. Some argue the focus should be on making platforms safer rather than restricting access altogether.

How Kids Might Try to Bypass the Ban

Despite the new restrictions, many tech-savvy children will likely find ways around the rules. Common methods include:

  • Using Fake Birthdates: One of the most basic ways underage users bypass age restrictions is by simply lying about their age when creating accounts.
  • Borrowing Accounts: Children may use or "borrow" accounts from older siblings or friends to maintain access.
  • VPNs: With VPNs becoming more accessible, teens may use them to disguise their location or identity to avoid detection.
  • Avoiding Verification: If platforms do not strictly enforce age checks, some users may simply slip through without being flagged.

Unless the technology behind age assurance is robust and widely adopted, enforcement will rely heavily on platform cooperation and user honesty, two factors historically hard to guarantee online.

Public and Global Response

Despite some pushback, the law enjoys strong public support. The eSafety Commissioner is currently developing detailed compliance guidelines, although how enforcement will work in practice remains unclear.

Australia is not alone in its crackdown on underage access to social media. In July 2025, new provisions of the United Kingdom's Online Safety Act came into force, introducing stricter age checks for users under 18 and requiring platforms to implement highly effective age assurance systems by 2026.

The UK law doesn’t ban children from social media outright, but it requires platforms to adjust content, features, and safety measures based on users’ ages, especially for those under 16.

Together, Australia and the UK now represent some of the toughest regulatory environments for youth access to social media, potentially setting the standard for other nations considering similar action.

As Australia’s December 2025 deadline approaches, the effectiveness and consequences of the new law, both intended and unintended, will become clearer. The outcome could set a global precedent for how countries approach online child safety regulation.
