In a significant move, Sam Altman, CEO of OpenAI, has agreed to issue an apology to the families affected by the deadly school shooting in Tumbler Ridge, British Columbia, which occurred on February 10. This decision follows a recent discussion with B.C. Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka, where the impact of the tragedy was addressed, particularly concerning the role of OpenAI’s ChatGPT platform.
OpenAI’s Missed Opportunity
During a video call lasting approximately 30 minutes, Premier Eby expressed concern over OpenAI’s failure to report troubling conversations the shooter had with ChatGPT in the months before the incident. Eby noted that OpenAI could have alerted authorities and possibly prevented the tragedy.
“The company had the opportunity to notify authorities and potentially even to stop this tragedy from happening,” Eby remarked, acknowledging that while OpenAI bears some responsibility, the conversation must also consider broader societal issues, including mental health support and the accessibility of firearms within the shooter’s home.
Inquest and Future Regulations
In the wake of the incident, the B.C. chief coroner has announced plans to hold an inquest into the shooting. Premier Eby has been adamant about seeking accountability from OpenAI, insisting on a meeting with Altman himself rather than with more junior executives. Eby also called for federal regulatory standards that would impose a “duty to report” on AI companies, so that warning signs of this kind are handled consistently in the future.

OpenAI has indicated its willingness to provide recommendations and advocate for these regulatory standards. “I don’t believe that OpenAI’s current standard is sufficient where there is an option to report,” Eby stated, emphasising the need for uniform guidelines across AI companies that offer chatbot services.
Federal Demands and Expert Involvement
On February 14, federal AI Minister Evan Solomon met with Altman to outline the Canadian government’s expectations, which include the involvement of local experts in assessing flagged conversations on ChatGPT. Solomon underscored the importance of having mental health, legal, and privacy specialists evaluate discussions that may indicate a risk of imminent harm.
Canada currently lacks comprehensive AI legislation, and has no regulations specifically tailored to chatbots. That gap has alarmed experts, who are calling for forthcoming online-harms legislation to cover chatbot interactions.
Legal and Ethical Concerns
The revelation that OpenAI did not alert Canadian authorities to the alarming conversations of the shooter, Jesse Van Rootselaar, has intensified scrutiny of how AI companies interact with law enforcement. Van Rootselaar, who fatally shot eight people—including six children under the age of 14—before taking her own life, had her account terminated for violating ChatGPT’s usage policy. OpenAI has maintained, however, that under its policies at the time the conversations did not constitute “credible and imminent planning” for violence.

A spokesperson for OpenAI was not available for comment regarding Altman’s forthcoming apology. Meanwhile, Solomon has reiterated the necessity for expert insights on such sensitive issues, though he did not confirm whether new regulations would be introduced to address the reporting obligations of AI firms.
Why it Matters
The tragedy in Tumbler Ridge highlights the need for clearer rules governing when AI companies must engage law enforcement, particularly where user behaviour may signal a risk to public safety. As communities demand accountability from technology firms, the incident may become a catalyst for regulations that hold AI providers to a consistent reporting standard and could help prevent future tragedies. Sustained dialogue between government, AI companies, and mental health experts will be essential to navigating this intersection of technology and public safety.