The UK’s Information Commissioner’s Office (ICO) has initiated an inquiry into Meta following alarming reports that outsourced workers engaged in training the company’s AI smart glasses have been exposed to deeply private footage, including intimate moments and personal activities. This unsettling revelation raises significant questions about data protection and user privacy in the rapidly evolving landscape of augmented reality technologies.
Disturbing Allegations of Privacy Violations
According to reports from the Swedish publications Svenska Dagbladet (SvD) and Göteborgs-Posten (GP), subcontractors in Kenya have claimed that they were required to review videos captured by Meta’s smart glasses. This footage reportedly includes individuals engaged in sexual activities, using the toilet, and undressing, all without the consent of those being recorded. The workers, who spoke on condition of anonymity, expressed doubt that users were aware the devices were recording them.
In its UK AI terms of service, Meta acknowledges that it may review user interactions with its AI systems, stating, “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).” However, the company has not publicly addressed the specific allegations regarding the review of sensitive personal footage.
ICO’s Response and Regulatory Implications
The ICO has responded to these claims by contacting Meta for clarification on how the company is meeting its data protection obligations. The regulator emphasised that service providers must be transparent about the data they collect and how it is used, especially when personal data is processed through smart devices.

A spokesperson for the ICO remarked, “Devices processing personal data, including smart glasses, should put users in control and provide appropriate transparency. This includes where user data is used to train or develop AI systems.” The ICO’s guidance underscores the necessity for manufacturers in the smart technology sector, including IoT devices, to adhere to strict data protection standards.
The Broader Implications for Privacy and Safety
Experts have previously raised concerns about Meta’s plans to incorporate AI-driven facial recognition features into its smart glasses. Such advancements could pose serious risks to vulnerable individuals, particularly women, by allowing wearers to identify and gather information about people in real-time, potentially enabling stalking and harassment.
Earlier reports have highlighted fears among women that the covert nature of the glasses could lead to violations of privacy, with some alleging that they have been recorded without their knowledge. This latest revelation only intensifies those concerns, suggesting that the intersection of technology and personal privacy is fraught with peril.
Why it Matters
As advanced technologies become ever more embedded in daily life, issues of privacy and data protection take centre stage. The allegations against Meta not only underscore the potential for misuse of emerging technologies but also highlight the urgent need for robust regulatory frameworks to safeguard individual rights. As consumers become increasingly aware of their digital footprints, the onus is on corporations to ensure transparency and accountability in their operations. The ICO’s probe into Meta’s practices could mark a pivotal moment in the ongoing debate about privacy, data ethics, and consumer trust in an age of rapid digital innovation.
