In the wake of a tragic school shooting in Tumbler Ridge, British Columbia, where 18-year-old Jesse Van Rootselaar took the lives of six individuals, including five students and a teacher’s aide, questions are emerging about OpenAI’s actions prior to the incident. A disturbing revelation indicates that a ChatGPT account linked to the shooter had been suspended by the company months earlier over concerning content, yet this information was never communicated to law enforcement.
A Meeting of Consequence
The day after the February 10 shooting, representatives from OpenAI had a pre-scheduled meeting with the B.C. government to discuss the potential establishment of a satellite office in Canada. On February 12, just two days after the attack, OpenAI reportedly sought assistance in contacting the Royal Canadian Mounted Police (RCMP). The timing has drawn scrutiny, as the company had previously flagged the shooter’s account for violent content but did not alert authorities to the risk.
Premier David Eby expressed his alarm, stating, “Reports that allege OpenAI had related intelligence before the shootings in Tumbler Ridge took place are profoundly disturbing for the victims’ families and all British Columbians.” He emphasised the need for a thorough investigation into the events leading up to this tragic incident.
OpenAI’s Defence and Controversies
OpenAI confirmed that Van Rootselaar’s account was suspended in June due to posts that triggered automatic screening systems. However, the company maintained that these posts did not indicate an “imminent and credible risk of serious physical harm to others” and therefore did not warrant a referral to law enforcement at that time.

This reasoning has not sat well with many, including Federal AI Minister Evan Solomon, who called the lack of timely reporting “deeply disturbing.” He urged OpenAI and other tech firms to reassess their safety protocols to protect public safety and ensure children are safeguarded from potential harm.
In the aftermath, OpenAI announced it would review its processes to enhance safety measures, yet scepticism remains. Critics argue that the company should have taken more proactive steps to inform authorities given the nature of the flagged content.
Legal and Ethical Implications
The incident has catalysed discussions around the responsibilities of AI companies in monitoring and reporting concerning user behaviour. Jay Edelson, a U.S. lawyer representing families who have suffered losses linked to OpenAI’s chatbot interactions, pointed out that this is not an isolated incident. He noted that the company has previously faced allegations of failing to alert authorities in similar cases involving discussions of violence and self-harm.
As scrutiny mounts, experts like Taylor Owen, an associate professor at McGill University, advocate for legislation that specifically addresses the responsibilities of AI platforms. Owen argues that the risks posed by these technologies must be acknowledged and managed adequately.
Community Impact and Response
In Tumbler Ridge, the community is grappling with the aftermath of the shooting. Local authorities, including the RCMP, are undertaking a comprehensive review of the shooter’s digital footprint, including social media interactions and online activities. Community leaders are working collaboratively to ensure safety and support for those affected.

Candice Alder, a psychotherapist and AI ethics consultant, warned against over-reliance on AI systems for public safety. She argued that these platforms are not substitutes for professional mental health services and cautioned against the dangers of normalising surveillance of online speech.
Why It Matters
The Tumbler Ridge tragedy underscores the urgent need for a robust dialogue surrounding the intersection of technology, public safety, and mental health. As our reliance on AI continues to grow, so too does the imperative for accountability in how these systems are monitored and managed. The implications of this incident stretch far beyond British Columbia, prompting a re-examination of the responsibilities borne by tech companies in safeguarding communities. As we seek answers, it is crucial that we ensure such tragedies are not repeated, prioritising the safety and well-being of our children and communities above all else.