In a significant move aimed at safeguarding the mental health of young users, Instagram will soon notify parents if their teenage children repeatedly search for terms related to self-harm or suicide. This proactive measure, announced by parent company Meta, represents a departure from its previous approach, which focused on blocking harmful content and directing users to external support without parental notification. Beginning next week, parents of teens in the UK, US, Australia, and Canada who use Instagram’s Teen Accounts will receive these alerts, with a global rollout expected to follow.
A New Approach to Online Safety
Meta’s new policy aims to empower parents by keeping them informed about their children’s online behaviours. Alerts will be triggered if a teen’s search patterns indicate a concerning trend, allowing parents to engage in crucial conversations about mental health. The notifications will come through various channels, including email, text, WhatsApp, or directly on the app, depending on the contact information provided by families.
However, the announcement has drawn criticism from mental health advocates, including the Molly Rose Foundation, which was established in memory of Molly Russell, a 14-year-old who tragically took her own life in 2017 after encountering harmful content online. Andy Burrows, the foundation’s chief executive, expressed concerns that these alerts might cause unnecessary panic among parents, leaving them unprepared to handle the sensitive discussions that would follow.
Concerns from Mental Health Experts
While the initiative has been welcomed by some, others argue it does not adequately address the underlying issues. Ged Flynn, the chief executive of Papyrus Prevention of Young Suicide, emphasised that the real challenge lies in preventing children from accessing harmful content in the first place. “Parents are not looking for alerts after the fact; they want to be proactive and prevent their children from encountering dangerous material online,” he stated.

Burrows further pointed out that prior research by the Molly Rose Foundation suggested Instagram still actively recommends harmful content, putting vulnerable users at risk. He argued that the emphasis should be on addressing these systemic issues rather than shifting the responsibility to parents through notifications.
Informed Guidance for Parents
Meta has stated that along with alerts, it will provide expert resources to help parents navigate these difficult conversations. Sameer Hinduja, co-director of the Cyberbullying Research Center, acknowledged the distress such alerts may cause but highlighted the importance of accompanying information to guide parents on how to respond effectively. “Simply notifying a parent of a potential issue is not enough; the support provided afterwards is crucial,” he remarked.
In addition to the alerts for searches, Instagram plans to implement similar notifications for conversations teens have with an AI chatbot regarding self-harm and suicide. This move reflects a broader trend among social media platforms to enhance safety measures for younger users as scrutiny from regulators continues to intensify.
Regulatory Pressures on Social Media
As governments worldwide demand greater accountability from social media companies, Instagram’s new measures come against a backdrop of increasing legal and regulatory pressure. Australia has recently enacted a ban on social media use for individuals under 16, and other nations, including Spain, France, and the UK, are contemplating similar legislation. This scrutiny has compelled platforms like Meta to reassess their practices concerning the safety of young users.

Meta’s leaders, including Mark Zuckerberg and Instagram chief Adam Mosseri, have faced questions in court over the company’s practices in targeting younger audiences, underscoring growing concern about social media’s impact on mental health.
Why It Matters
The introduction of parental alerts marks a pivotal step in the ongoing effort to protect young users from harmful online content. While the initiative seeks to strengthen parental engagement, it raises critical questions about whether such measures address the root causes of mental health issues among adolescents. As social media continues to play a central role in the lives of young people, it is imperative that technology companies and society at large work collaboratively to create safer online environments that prioritise the well-being of their most vulnerable users.