Meta Platforms Inc. is under fire after internal documents, revealed in a court case, suggested that CEO Mark Zuckerberg approved the use of AI chatbots designed for companionship despite warnings from safety staff about potential sexual interactions with minors. The allegations, brought by New Mexico Attorney General Raul Torrez, point to a significant failure to protect children from harmful content on platforms like Facebook and Instagram.
Lawsuit Highlights Internal Concerns
The lawsuit, expected to go to trial next month, alleges that Meta did not take adequate measures to prevent minors from being exposed to sexual material and propositions through its platforms. Recent filings include a series of internal communications that indicate a troubling disregard for recommendations made by the company’s integrity team.
According to the documents, Meta’s safety staff had raised concerns about the development of chatbots capable of engaging in romantic conversations, particularly with users under 18. The AI chatbots were launched in early 2024, and internal emails obtained through legal discovery paint a picture of a company prioritising innovation over user safety.
Ravi Sinha, head of Meta’s child safety policy, expressed his reservations in a January 2024 message, stating, “I don’t believe that creating and marketing a product that creates U18 romantic AI’s for adults is advisable or defensible.” This sentiment was echoed by Antigone Davis, Meta’s global safety head, who agreed that adults should be restricted from creating underage romantic companions to avoid sexualising minors.
Zuckerberg’s Vision vs. Safety Recommendations
Despite internal pushback, Zuckerberg allegedly pushed for a more lenient approach towards the chatbots. A February 2024 message indicated that he believed AI companions should be allowed to engage in less restrictive conversations, including those about sex, asserting that the narrative should focus on “general principles of choice and non-censorship.”
Meta spokesperson Andy Stone defended the company’s actions, arguing that the state’s interpretation of the documents was misleading. “Even these select documents clearly show Mark Zuckerberg giving the direction that explicit AIs shouldn’t be available to younger users,” he stated, contending that the evidence does not support New Mexico’s claims.
Internal communications from March 2024 revealed that staff had advocated for parental controls for the chatbots but faced opposition from leadership, with one employee noting, “We pushed hard for parental controls to turn GenAI off – but GenAI leadership pushed back stating Mark’s decision.”
Reactions and Policy Changes
Nick Clegg, who served as Meta’s head of global policy until early 2025, expressed his concerns in an email included in the court documents. He warned that sexual interactions could become a predominant use case for the AI companions, questioning whether this was the image Meta intended for its products. Clegg did not respond to requests for comment.
Following mounting scrutiny, including backlash from the U.S. Congress, Meta has since modified its policies on AI chatbots. Reports in April 2025 revealed that the chatbots had included sexualised underage characters, sparking public outrage. In response, Meta announced it was removing teen access to AI companions while it works on a new version that adheres to stricter safety standards.
Why it Matters
The revelations surrounding Meta’s handling of AI chatbots raise serious questions about the balance between technological innovation and user safety. As AI continues to evolve, the stakes for child protection and ethical governance grow. This case not only highlights potential lapses in corporate responsibility but also underscores the urgent need for robust regulatory frameworks to safeguard vulnerable users online.