The Federal Trade Commission has announced a formal inquiry into companies that develop AI companion chatbots, focusing specifically on how these platforms may harm children and teenagers. While the inquiry is not currently tied to any regulatory action, it seeks to understand how companies "measure, test, and monitor potentially negative impacts of this technology on children and teens".
Companies under scrutiny
Seven major technology companies are named in the inquiry: Alphabet (Google's parent company), Character Technologies (creator of Character.AI), Meta, Instagram (a Meta subsidiary), OpenAI, Snap, and X.AI. Each is being asked to provide comprehensive information about its AI chatbot operations and safety measures.
Investigation scope
The FTC is requesting detailed information across several key areas. Companies must explain how they develop and approve AI characters, including their processes for "monetizing user engagement". Data protection practices are also under examination, particularly how companies safeguard underage users and ensure compliance with the Children's Online Privacy Protection Act Rule (COPPA).
Motivation and concerns
Although the FTC hasn't explicitly stated the motivation behind its investigation, FTC Commissioner Mark Meador referenced troubling reports from The New York Times and The Wall Street Journal highlighting "chatbots amplifying suicidal ideation" and engaging in "sexually-themed discussions with underage users". Meador emphasized that if violations are discovered, "the Commission should not hesitate to act to protect the most vulnerable among us".
Broader regulatory landscape
This investigation reflects growing regulatory concern about AI's immediate negative impacts on privacy and health, especially while its long-term productivity benefits remain uncertain. The FTC's inquiry isn't isolated: the Texas Attorney General has already launched a separate investigation into Character.AI and Meta AI Studio, examining similar concerns about data privacy and about chatbots falsely presenting themselves as mental health professionals.
Implications
The investigation represents a significant regulatory response to emerging AI safety concerns, particularly regarding vulnerable populations. As AI companion technology proliferates, this inquiry may establish important precedents for industry oversight and child protection standards in the AI sector.