In a harrowing revelation, OpenAI disclosed that it had flagged the account of Jesse Van Rootselaar, the suspect in a tragic school shooting in British Columbia, for “furtherance of violent activities” just months before the incident. The disclosure raises serious questions about the thresholds tech companies apply before acting on potentially dangerous users.
A Troubling Timeline
OpenAI flagged Van Rootselaar’s account in June 2025, prompting internal discussions about whether to alert the Royal Canadian Mounted Police (RCMP). Ultimately, the company decided against a referral, concluding that the user’s actions did not pose an imminent or credible risk of serious harm, despite the concerning behaviour that had triggered the flag.
Fast forward to February 2026, and the situation took a devastating turn. Van Rootselaar, only 18 years old, carried out one of the worst school shootings in Canadian history, killing eight people, including a teaching assistant and five students aged just 12 to 13. The gunman later died from a self-inflicted gunshot wound.
OpenAI’s Response
Following the shocking events, OpenAI acted swiftly to assist law enforcement. The company reached out to the RCMP with details about Van Rootselaar’s use of their AI platform, ChatGPT, stating, “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information about the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
This proactive approach, though commendable, raises concerns about the soundness of the company’s initial judgment. The threshold for reporting appears alarmingly high, especially in light of the catastrophic outcome.
The Broader Implications
Van Rootselaar’s case is not an isolated concern. It highlights a critical gap at the intersection of technology and public safety. The RCMP reported that Van Rootselaar had a history of mental health issues and previous encounters with law enforcement, yet the motive behind the shooting remains unclear. This ambiguity complicates the already challenging task of predicting violent behaviour from online activity.
The community of Tumbler Ridge, a small town of 2,700 nestled in the Canadian Rockies more than 1,000 km northeast of Vancouver, is now grappling with the aftermath of this senseless violence, mourning its dead and searching for answers.
Why it Matters
This incident serves as a wake-up call for technology companies and law enforcement alike. The ability to identify potentially harmful behaviour online is becoming increasingly crucial in an age where digital platforms shape so much of daily interaction. OpenAI’s situation underscores the pressing need for systems that can better assess risk and ensure timely intervention. If tech companies do not prioritise user safety and refine their protocols for flagging dangerous behaviour, we may face more tragic outcomes in the future. The responsibility lies heavily on both the tech industry and society to create safer environments for all.
