Meta Overhauls AI Chatbot Safeguards for Teenagers


Meta has announced new artificial intelligence safeguards to protect teenagers following a damaging Reuters investigation that exposed internal company policies allowing inappropriate chatbot interactions with minors. The social media giant is now training its AI systems to avoid engaging teenage users in flirtatious conversations or discussions of self-harm and suicide.

The Reuters investigation

The controversy began when Reuters uncovered an internal 200-page Meta document titled "GenAI: Content Risk Standards" that permitted chatbots to engage in "romantic or sensual" conversations with children as young as 13. 

The document contained disturbing examples of acceptable AI responses, including "Your youthful form is a work of art" and "Every inch of you is a masterpiece – a treasure I cherish deeply". These guidelines had been approved by Meta's legal, public policy, and engineering teams, including the company's chief ethicist. 

Immediate safety measures 

Meta spokesperson Andy Stone announced that the company is implementing immediate interim measures while developing more comprehensive long-term solutions for teen AI safety. The new safeguards include training chatbots to avoid discussing self-harm, suicide, disordered eating, and potentially inappropriate romantic topics with teenage users. Meta is also temporarily limiting teen access to certain AI characters that could hold inappropriate conversations.

Among Meta's user-created AI characters are sexualized chatbots such as "Step Mom" and "Russian Girl," which will now be restricted for teen users. Instead, teenagers will have access only to AI characters that promote education and creativity. The company acknowledged that these changes reverse its earlier position that such conversations were acceptable.

Government response and investigation

The revelations sparked swift political backlash. Senator Josh Hawley launched an official investigation into Meta's AI policies, demanding documentation about the guidelines that enabled inappropriate chatbot interactions with minors. A coalition of 44 state attorneys general wrote to AI companies including Meta, saying they were "uniformly revolted by this apparent disregard for children's emotional well-being".

Senator Edward Markey has urged Meta to completely prevent minors from accessing AI chatbots on its platforms, citing concerns that Meta incorporates teenagers' conversations into its AI training process. The Federal Trade Commission is now preparing to scrutinize the mental health risks of AI chatbots to children and will demand internal documents from major tech firms including Meta. 

Meta's response and timeline

Meta confirmed the document's authenticity but said the guidelines were "inconsistent with its broader policies", and it has since removed the sections allowing chatbots to flirt or engage in romantic roleplay with minors. Company spokesperson Stephanie Otway acknowledged these were mistakes, stating that the updates are "already in progress" and that the company will "continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI".

The controversy highlights broader concerns about AI chatbot safety for vulnerable users, particularly as large companies embed these tools directly into widely used platforms where most young people will encounter them.