Big Tech Under Scrutiny for Breaching Australia’s Social Media Age Restrictions

Ryan Patel, Tech Industry Reporter
6 Min Read

Major technology firms Meta, TikTok, and Google are facing scrutiny from the Australian government for allegedly flouting a newly implemented ban that prohibits individuals under the age of 16 from using social media platforms. A recent survey has revealed that a substantial number of children continue to hold accounts on popular platforms such as Instagram, Snapchat, and TikTok, raising concerns about the effectiveness of the age verification measures in place.

The Ban’s Implementation and Initial Findings

Australia’s stringent social media regulations, which came into effect on 10 December 2025, explicitly restrict users under the age of 16 from holding accounts on platforms deemed “age-restricted.” The laws require these companies to take reasonable steps to prevent minors from accessing their services. However, a survey conducted among nearly 900 Australian parents indicates that compliance is lacking; approximately 31% reported that their children still had social media accounts after the ban, a notable decline from 49% prior to its enactment.

The eSafety Commissioner has pointed out that nearly 70% of under-16s who had accounts on Instagram, Snapchat, or TikTok before the ban have managed to retain access. Communications Minister Anika Wells has expressed frustration at the apparent ineffectiveness of the measures employed by these tech giants, stating that age verification technology such as facial recognition is inadequate and that the companies are not adequately enforcing the law.

Government’s Stance on Compliance

On Tuesday, it was disclosed that platforms including Instagram, Facebook, Snapchat, TikTok, and YouTube are under investigation for potential non-compliance, with Minister Wells emphasising that businesses wishing to operate in Australia must adhere to local laws. She highlighted the need for stronger action, noting that the current practices observed in the industry reflect “the absolute bare minimum” expected from these corporations.

“None of this is impossible. None of this is even difficult for big tech, who are innovative billion-dollar companies. What this update shows is unacceptable,” Wells declared in a statement made in Canberra. The laws carry stringent penalties, with fines reaching up to A$49.5 million (approximately US$33.9 million or £25.7 million), although the government is still in the process of gathering evidence before deciding whether to impose such penalties.

Industry Response and Challenges Ahead

In response to the allegations, Meta has reiterated its commitment to comply with the new regulations and to collaborate with the eSafety Commissioner. The company acknowledged the complexities involved in accurately determining age online, especially at the critical age-16 threshold. “The most effective, privacy-protective and consistent approach is to require robust age verification and parental approval at the app store and operating system level before a teen can download an app or create an account,” a Meta spokesperson stated.

Both TikTok and Google were contacted for comment but had not responded by the time of publication. Meanwhile, the Australian government has touted the early success of the ban, claiming that over 4.7 million social media accounts were deactivated or restricted shortly after its implementation. However, anecdotal evidence suggests that many children remain online, raising serious questions about the efficacy of the policy.

The Path Forward

The eSafety Commissioner’s first compliance report revealed that, despite an overall decrease in account ownership among under-16s, a significant number of children still have access to age-restricted platforms. The report indicated that many retained their accounts because they had not yet been prompted to verify their age. It also raised concerns about various “poor practices” employed by the platforms, including allowing minors to attempt age verification multiple times and failing to provide clear pathways for reporting underage users.

The report also highlighted the limitations of facial age estimation technology, especially for users close to the 16-year-old boundary, which has led to many younger users being misclassified as older. This situation underscores the ongoing challenges faced by both regulators and tech companies in creating a safe online environment for children.

Why it Matters

The ongoing investigation into the compliance of major tech firms with Australia’s age restrictions is emblematic of a global struggle to safeguard minors in the digital landscape. As social media continues to evolve and permeate everyday life, the responsibility of these platforms to protect younger users becomes increasingly critical. The outcome of this inquiry could set important precedents for future regulations, influencing not just Australia but potentially offering a framework for other nations grappling with similar issues. The effectiveness of these measures will ultimately hinge on the commitment of tech giants to prioritise user safety over profit, a balance that remains precariously tilted in favour of the latter.

Ryan Patel reports on the technology industry with a focus on startups, venture capital, and tech business models. A former tech entrepreneur himself, he brings unique insights into the challenges facing digital companies. His coverage of tech layoffs, company culture, and industry trends has made him a trusted voice in the UK tech community.

© 2026 The Update Desk. All rights reserved.