Meta, the parent company of Facebook, Instagram, and WhatsApp, is facing intense scrutiny over a leaked internal document that allegedly reveals troubling behavior by its AI systems. The document, obtained by Reuters and titled “GenAI: Content Risk Standards,” reportedly shows that Meta’s AI chatbots were permitted to engage in “sensual” and “romantic” conversations with children. Cited examples, including inappropriate descriptions of a child’s body and emotionally charged language, sparked immediate backlash.
U.S. Senator Josh Hawley condemned the revelations as “reprehensible and outrageous,” accusing Meta of prioritizing profit over child safety.
He announced a formal investigation and demanded access to the full document and a list of affected products. In a public statement, Hawley declared: “Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. Big Tech: Leave our kids alone.”
Meta has denied the claims, stating that the cited examples were erroneous and inconsistent with company policy. The company said the document included hypothetical scenarios that do not reflect actual AI behavior, and emphasized its strict policies prohibiting sexualized content involving minors. A spokesperson said the controversial content has been removed and does not align with Meta’s official standards.
The leaked document also reportedly revealed other risks: AI providing false medical information, engaging in provocative discussions of sensitive topics such as sex, race, and celebrities, and spreading false claims about public figures so long as disclaimers were included.
Senator Hawley’s letter to Meta CEO Mark Zuckerberg demands transparency, stating: “Parents deserve the truth, and kids deserve protection.” Meta has not confirmed the full scope of the leaked document, but the controversy has reignited debates around AI safety, corporate responsibility, and the ethical boundaries of generative technology.
