In a significant development following the devastating mass shooting in Tumbler Ridge, British Columbia, Sam Altman, the CEO of OpenAI, is set to express remorse to the families affected by the tragedy. Premier David Eby announced that Altman will address the community after a video call on Thursday, where discussions highlighted the controversial role of OpenAI’s ChatGPT platform in the lead-up to the February 10 incident.
**Context of the Tragedy**
The meeting between Altman, Premier Eby, and Tumbler Ridge Mayor Darryl Krakowka revealed that the shooter, Jesse Van Rootselaar, had engaged in concerning conversations on the ChatGPT platform months prior to the incident. Despite these warning signs, OpenAI did not alert law enforcement. Eby remarked that OpenAI had a moral obligation to notify authorities, which could have potentially prevented the tragedy.
“I asked for the apology because OpenAI had the opportunity to notify authorities and potentially even to stop this tragedy from happening,” Eby stated. However, he acknowledged that there are broader societal issues at play, including the need for improved mental health resources and the accessibility of firearms in the home.
**Investigative Developments**
During the call, Eby refrained from delving into the specifics of the content discussed between Van Rootselaar and the AI platform, opting instead to respect the ongoing investigation led by the Royal Canadian Mounted Police (RCMP). “I made the very specific decision not to ask about the content of the chats with Mr. Altman. I don’t want to play any role in interfering with the criminal investigation that’s under way,” the Premier explained.

The RCMP says it has issued preservation orders to all relevant social media and AI companies as part of its investigation, ensuring that any evidence related to the case is secured.
**Calls for Regulatory Change**
Eby insisted on meeting with Altman rather than lower-level executives, underscoring the gravity of the situation. He has also urged OpenAI to support the establishment of federal regulatory standards that would require AI companies to adhere to a “duty to report” concerning potential threats.
“The company agreed to make recommendations and to provide advocacy around federal regulatory standards. I don’t believe that OpenAI’s current standard is sufficient where there is an option to report,” Eby commented. He emphasised the necessity for uniform reporting standards across all AI service providers.
**Government Engagement**
In a related development, Canadian AI Minister Evan Solomon met with Altman on Wednesday to outline Ottawa’s expectations regarding AI accountability. Solomon highlighted the need for Canadian experts to evaluate flagged ChatGPT conversations, particularly those that suggest a risk of imminent harm.

Canada currently lacks comprehensive legislation governing AI, and unlike some other jurisdictions, it has no regulations specific to chatbot platforms. Experts have suggested that forthcoming online safety legislation should cover both chatbots and traditional social media networks.
A spokesperson for OpenAI did not respond to requests for comment regarding Altman’s planned apology.
**Why it Matters**
The Tumbler Ridge tragedy has ignited a crucial conversation about the responsibilities of AI companies in safeguarding the public. With details of the shooting pointing to potential failures in reporting and accountability, the incident may catalyse significant regulatory change. As communities grapple with the aftermath, the implications for mental health support, firearms accessibility, and AI ethics are pressing and far-reaching, demanding immediate attention from policymakers and tech leaders alike.