Meta is under fire after newly surfaced internal guidelines revealed that its AI chatbots were permitted to engage in “sensual” conversations with children as young as eight. The document, obtained by Reuters, detailed scenarios in which the bots could make romantic or intimate remarks to minors, sparking bipartisan outrage and calls for an investigation.
The revelations have intensified scrutiny of the tech giant’s safety protocols, with lawmakers and child protection advocates warning that such interactions pose serious risks to vulnerable users. Meta has since amended the guidelines, but critics say the changes came only after public exposure and do not go far enough to safeguard children.
Meta Report Sparks Bipartisan Backlash
The political fallout over Meta’s AI chatbot guidelines has been swift and bipartisan, with lawmakers across the aisle condemning the company’s handling of safety protocols for minors. Republican Senators Josh Hawley of Missouri and Marsha Blackburn of Tennessee have called for a congressional investigation, saying the revelations about “sensual” chatbot interactions with children highlight a dangerous lack of oversight.
On the Democratic side, Senators Ron Wyden of Oregon and Peter Welch of Vermont have also criticized Meta, arguing that generative AI should not be shielded by Section 230 protections when it facilitates harmful or exploitative content.
The controversy has reignited momentum behind the Kids Online Safety Act, a bill that passed the Senate but stalled in the House, which would impose stricter obligations on tech companies to protect minors. Lawmakers and child safety advocates say Meta’s own internal rules — allowing flirtatious or intimate chatbot responses to minors until recently — underscore the need for binding regulations rather than voluntary corporate policies.