The UK’s Information Commissioner’s Office (ICO) has opened an inquiry into Meta following reports that private and intimate footage recorded by the tech giant’s AI-enabled smart glasses is being viewed by outsourced workers. The situation raises significant concerns about user privacy and the ethics of data handling in the fast-growing field of AI technology.
Allegations of Invasive Data Practices
Recent reports from the Swedish publications Svenska Dagbladet and Göteborgs-Posten have unveiled disturbing claims that subcontractors in Kenya, tasked with training the smart glasses’ artificial intelligence, have been exposed to distressing content. This includes footage of individuals in private settings, including people engaged in sexual acts or using the toilet. The subcontractors, speaking on condition of anonymity, expressed doubts that the individuals captured on video were aware they were being recorded.
Meta’s terms regarding AI usage state that the company may review user interactions with its AI systems, which could involve both automated and manual assessments of conversations and messages. However, the company has refrained from commenting on these specific allegations when approached for clarification.
ICO Takes Action
In response to these grave concerns, the ICO has reached out to Meta for details on how the company is fulfilling its obligations under UK data protection laws. The regulator emphasised that users must be informed about what data is collected and how it is utilised. An ICO spokesperson stated, “Devices processing personal data, including smart glasses, should put users in control and provide appropriate transparency.”

One subcontractor described the disturbing nature of the footage they encountered, stating, “We see everything – from living rooms to naked bodies.” Another added that explicit content was particularly sensitive, suggesting a potential breach of privacy rights.
The Broader Implications of Smart Technology
The ICO’s inquiry comes at a time when concerns over privacy and data security in relation to smart devices are escalating. Critics have long warned that the integration of AI features, such as facial recognition, into consumer technology could pose significant risks, particularly for vulnerable populations. Previous reports indicated that Meta’s plans to incorporate AI facial recognition could expose women and girls to increased dangers, enabling potential predators to exploit these technologies for harmful purposes.
Charities and experts have voiced strong objections, warning that the discreet design of these devices makes it easy to film people without their knowledge or consent, undermining their privacy and safety.
Why it Matters
The implications of this unfolding situation extend well beyond corporate accountability. As technology weaves itself ever more deeply into everyday life, rigorous data protection and ethical standards become increasingly critical. The ICO’s scrutiny of Meta is a reminder of the delicate balance between innovation and individual privacy rights. As consumers, we must remain vigilant and demand transparency from tech companies, ensuring that our digital interactions do not come at the expense of our personal autonomy and safety.
