OpenAI’s Handling of Personal Data Under Scrutiny: Canadian Regulators Unveil Findings

Chloe Henderson, National News Reporter (Vancouver)
5 Min Read

In a comprehensive three-year investigation, Canadian privacy authorities have determined that OpenAI breached privacy laws during the rollout of ChatGPT. The report, released on Wednesday, highlights significant concerns regarding the collection and management of personal information, but also notes that the company has taken steps to rectify these issues.

Findings of the Investigation

The Office of the Privacy Commissioner of Canada, alongside provincial counterparts from Quebec, Alberta, and British Columbia, initiated the inquiry after receiving a complaint in April 2023. The investigation unveiled that OpenAI amassed extensive personal data without proper safeguards or informed consent from users. Many individuals were reportedly oblivious to the fact that their information was being utilised to train AI models.

The regulators expressed particular discontent over OpenAI’s failure to provide Canadians with a straightforward mechanism to amend or delete their personal data. Additionally, the report pointed out that the launch of ChatGPT proceeded without adequately addressing known privacy risks. OpenAI’s lack of transparency regarding inaccuracies in ChatGPT’s responses was also a critical concern.

Changes Implemented by OpenAI

Since the investigation began, OpenAI has made notable changes to its practices. The company has introduced filters designed to identify and obscure personal information, as well as technical measures to prevent ChatGPT from disclosing sensitive details about public figures. Furthermore, OpenAI has established a formal policy for data retention and deletion.

Looking ahead, the company has committed to enhancing user awareness, particularly for those accessing the web version of ChatGPT while signed out. Users will now be duly informed that their conversations may contribute to training future AI models, and they will be cautioned against sharing sensitive information.

Philippe Dufresne, Canada’s Privacy Commissioner, expressed optimism regarding these developments. “I’ve concluded that the measures that have been and will be implemented by OpenAI will address the concerns identified during the investigation,” he stated during a press conference.

Regulatory Response and Future Implications

Despite the findings, Quebec’s privacy authority decided against imposing financial penalties on OpenAI, opting instead to issue recommendations for improvement. Naomi Ayotte, vice-president at the Commission d’accès à l’information du Québec, stated, “We have decided to make recommendations instead,” emphasising a cooperative approach to resolving privacy concerns.

Legal experts have highlighted the significance of the report in advancing privacy protections. Teresa Scassa, a law professor at the University of Ottawa, noted, “There is real value in this in the sense that it helps to advance privacy protection, with co-operation and engagement from industry.”

The investigation also underscores the evolving nature of AI technology and its implications for privacy regulation. Michael Geist, another law professor at the University of Ottawa, remarked that the report indicates policymakers are struggling to keep pace with technological advancements.

The Need for Updated Privacy Framework

The inquiry has reignited discussions around the necessity of modernising Canada’s privacy legislation. The Liberal government introduced a new privacy and data bill in 2022, but the legislation stalled when Parliament was prorogued in January 2025. No new version has been tabled since, though federal AI Minister Evan Solomon has affirmed the government’s commitment to developing a comprehensive privacy framework that keeps pace with the rapid evolution of technology.

Concerns remain regarding the acquisition of valid consent for data collection, particularly when AI companies utilise publicly available information from the internet. Emily Laidlaw, an associate law professor at the University of Calgary, argued that the focus should shift from explicit individual consent to establishing robust principles and accountability measures for AI usage.

Diane McLeod, Alberta’s Information and Privacy Commissioner, reiterated the need for expanded oversight, including monetary penalties and mandatory assessments prior to the deployment of new technologies. Such measures, she argued, would keep privacy protections intact while still fostering innovation.

Why it Matters

The findings of this investigation are pivotal not only for OpenAI but for the broader landscape of AI development and usage in Canada. As technology continues to advance at a breathtaking pace, the need for a robust privacy framework becomes increasingly urgent. This case serves as a crucial reminder of the delicate balance between innovation and privacy rights, underscoring the importance of regulatory vigilance in a rapidly changing digital world. The steps taken by OpenAI following the investigation could set a precedent for other tech companies, shaping how personal data is handled in the age of artificial intelligence.

© 2026 The Update Desk. All rights reserved.