A US senator has opened an investigation into Meta after a leaked internal document reportedly showed that the company's artificial intelligence guidelines permitted "sensual" and "romantic" conversations with children.
Internal rules raise questions
Reuters reported that the document was titled "GenAI: Content Risk Standards." Republican Senator Josh Hawley called its contents "reprehensible and outrageous" and demanded full access to the document, along with a list of the products it covered.
Meta dismissed the accusations. A spokesperson said: "The examples and notes in question were erroneous and inconsistent with our policies." The spokesperson stressed that Meta enforces "clear rules" on chatbot responses, rules that "prohibit content that sexualizes children and sexualized role play between adults and minors."
The company added that the document contained "hundreds of examples and annotations" in which teams tested hypothetical scenarios.
Senator takes aim at Big Tech
Hawley, who represents Missouri, announced the probe on 15 August in a post on X. "Is there anything Big Tech won't do for a quick buck?" he wrote. He continued: "Now we learn Meta's chatbots were programmed to carry on explicit and 'sensual' talk with 8-year-olds. It's sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone."
Meta owns Facebook, WhatsApp and Instagram.
Public pressure grows
The leaked document reportedly revealed further problems. According to Reuters, it stated that Meta's chatbots could spread false medical information and engage in controversial discussions of sex, race, and celebrities. The document was intended to set content standards for Meta AI and the other chatbot assistants on the company's platforms.
"Parents deserve the truth, and kids deserve protection," Hawley wrote in a letter addressed to Meta chief executive Mark Zuckerberg. He pointed to one disturbing example: the rules allegedly allowed a chatbot to tell an eight-year-old that their body was "a work of art" and "a masterpiece – a treasure I cherish deeply."
Reuters also reported that Meta's legal department had approved controversial rules. One example permitted Meta AI to spread false information about celebrities, provided it added a disclaimer noting that the information was untrue.