Meta Platforms, Inc. is currently embroiled in a legal battle in New Mexico, where the state’s attorney general has filed a lawsuit alleging that the company permitted minors to access artificial intelligence chatbots capable of engaging in sexual conversations. Internal communications from Meta employees, disclosed as part of the court proceedings, reveal that safety personnel had previously raised significant concerns about the implications of such access. The lawsuit is set for trial next month, highlighting ongoing scrutiny over the tech giant’s policies regarding child safety.
Allegations of Negligence in User Safety
The lawsuit, initiated by New Mexico Attorney General Raúl Torrez, asserts that Meta has “failed to stem the tide of damaging sexual material and sexual propositions delivered to children” on its platforms, including Facebook and Instagram. The filing, made public on Monday, includes internal emails and messages from Meta staff that reveal a clear rift between company leadership and safety personnel over the development and deployment of AI companions designed for users, including minors.
Despite warnings from staff, internal documents suggest that Meta’s Chief Executive Mark Zuckerberg approved the rollout of these AI companions, which were launched in early 2024. The attorney general’s office contends that the company ignored the recommendations of its integrity team and opted against implementing adequate safeguards to protect younger users from potentially exploitative interactions.
Internal Concerns Over AI Companionship
The communications presented in the lawsuit reflect a troubling culture within Meta, where some employees expressed outright disapproval of creating chatbots intended for companionship that could facilitate sexual or romantic interactions with users. Ravi Sinha, who heads Meta’s child safety policy, articulated his concerns in a January 2024 message: “I don’t believe that creating and marketing a product that creates U18 romantic AIs for adults is advisable or defensible.”
In a subsequent exchange, Antigone Davis, Meta’s global head of safety, concurred with Sinha, emphasising the need to prevent adults from forming romantic connections with underage companions. Despite these internal objections, leadership pursued a direction that imposed less stringent restrictions on adult interactions with AI chatbots, including discussions of sexual topics.
The Response from Meta Leadership
Andy Stone, a spokesperson for Meta, has countered the claims made in the lawsuit, asserting that the state’s interpretation of the internal documents is misleading and selectively curated. He stated, “Even these select documents clearly show Mark Zuckerberg giving the direction that explicit AIs shouldn’t be available to younger users.”
However, the documents suggest that Zuckerberg rejected parental controls for these chatbots and permitted the development of “Romance AI chatbots” for users under the age of 18. This approach has raised significant ethical questions, particularly over the possibility that sexual interactions could become a primary use case among teenage users.
Nick Clegg, who served as Meta’s head of global policy until early 2025, voiced concern about the consequences of sexualised AI companions. In one email, he questioned whether the company truly wanted its products to be associated with such controversial uses, particularly given the societal backlash that could ensue.
Policy Changes Amidst Backlash
In the wake of public scrutiny, Meta recently announced a temporary suspension of teenage access to AI companions while it revises its policies. The decision followed revelations that its chatbots had included sexualised underage characters and engaged in inappropriate roleplay, prompting calls for reform from within the company and from external stakeholders alike.
The legal proceedings and the ensuing media coverage have compelled Meta to confront the ethical implications of its AI technologies and their potential impacts on younger users.
Why it Matters
The outcome of this lawsuit could have significant ramifications for Meta and the broader technology sector, particularly in how companies handle user safety and the ethical implications of AI. As public awareness surrounding the protection of minors online continues to grow, the pressure is mounting for tech giants to establish responsible practices that prioritise user safety over profit. The scrutiny faced by Meta serves as a crucial reminder of the need for greater accountability and transparency in the development of AI technologies, especially those designed for vulnerable populations.