Big Tech Faces Scrutiny Over Compliance with Australia’s Under-16 Social Media Ban

Ryan Patel, Tech Industry Reporter
5 Min Read

The Australian government is intensifying its scrutiny of major tech companies, including Meta, TikTok, and Google, amid allegations of non-compliance with the nation’s recent ban on social media use by under-16s. The investigation follows a survey indicating that a significant number of minors still maintain accounts on popular platforms such as Instagram and Snapchat despite the new regulations.

The Allegations and Government Response

The eSafety Commission’s findings reveal a concerning trend: approximately 70% of under-16s who held accounts on platforms such as Instagram, Snapchat, and TikTok have retained access since the ban took effect in December 2025. Separately, a survey of nearly 900 Australian parents found that 31% reported their children still had social media accounts, down from roughly 50% before the ban.

Communications Minister Anika Wells expressed her frustration with the tech giants, claiming they are failing to adequately enforce the new age restrictions. “The technology is there, and these companies have the resources to implement effective measures,” she stated in Canberra. “If they wish to operate in Australia, compliance with our laws is non-negotiable.”

Under the new legislation, platforms such as Facebook, Instagram, and TikTok are classified as “age-restricted”, prohibiting under-16 users from having accounts and obliging companies to take reasonable measures to prevent minors from signing up. The penalties for non-compliance could be severe, with fines reaching up to A$49.5 million (approximately US$33.9 million). Wells confirmed that the eSafety office is currently gathering evidence to determine if fines will be levied against any of the implicated companies.

The government was initially optimistic about the ban’s effectiveness, with reports indicating that more than 4.7 million accounts had been deactivated or restricted shortly after the new rules came into effect. However, ongoing anecdotal evidence suggests that many children continue to slip through the cracks, raising questions about the legislation’s actual impact.

Efficacy of Age Verification Technologies

The ongoing investigation has illuminated potential shortcomings in the age verification methods employed by social media companies. The eSafety Commission has reported concerning practices, including the use of facial age estimation technology that has proven unreliable for those near the age threshold. Many platforms allow users to repeatedly attempt age verification, which raises significant doubts about the integrity of the process.

Meta has publicly stated its commitment to adhering to the new regulations while also highlighting the challenges of accurately determining age online. In its defence, the company has argued that a more robust approach would involve requiring parental consent and age verification at the app-store level, rather than relying solely on the platforms themselves.

The Public Reaction and Future Implications

The Australian public’s perception of the new ban is mixed. While many parents have welcomed the initiative, the survey results have sparked concerns about its effectiveness. As the government works to bolster enforcement, it faces the daunting task of ensuring that tech giants not only comply with the law but also take meaningful steps to protect vulnerable users.

The conversation around this issue is not merely confined to Australia. The implications of these regulatory measures could resonate globally, as governments worldwide grapple with the challenges of safeguarding minors in an increasingly digital landscape.

Why it Matters

This situation underscores a critical inflection point in the relationship between governments and technology companies. As digital platforms continue to wield immense influence over young users, the responsibility to protect children from potential harm becomes paramount. The outcome of this investigation could set a precedent for future regulations worldwide, shaping the landscape of social media governance and user safety for years to come.

Ryan Patel reports on the technology industry with a focus on startups, venture capital, and tech business models. A former tech entrepreneur himself, he brings unique insights into the challenges facing digital companies. His coverage of tech layoffs, company culture, and industry trends has made him a trusted voice in the UK tech community.

© 2026 The Update Desk. All rights reserved.