Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI over concerns that their AI chatbots may present themselves as health or therapeutic tools while potentially misusing data collected from underage users. Paxton argued that some chatbots on these platforms misrepresent their expertise by suggesting they are licensed professionals, which could leave minors vulnerable to misleading or harmful information.
The issue extends beyond false claims of qualifications. AI models often learn from user prompts, raising concerns that children’s data may be stored and used for training purposes without adequate safeguards. Texas law places particular restrictions on the collection and use of minors’ data under the SCOPE Act, which requires companies to limit how information from children is processed and to provide parents with greater control over privacy settings.
As part of the inquiry, Paxton issued Civil Investigative Demands (CIDs) to Meta and Character.AI to determine whether either company is in violation of consumer protection laws in the state. While neither company explicitly promotes its AI tools as substitutes for licensed mental health services, there are multiple examples of “Therapist” or “Psychologist” chatbots available on Character.AI. Reports have also shown that some of these bots claim to hold professional licenses, even though they are entirely fictional characters.
In response to the investigation, Character.AI emphasized that its products are intended solely for entertainment and are not designed to provide medical or therapeutic advice. The company said it places disclaimers throughout its platform to remind users that AI characters are fictional and should not be treated as real individuals. Similarly, Meta stated that its AI assistants are clearly labeled and include disclaimers highlighting that responses are generated by machines, not people.
Meta also said its AI tools are designed to direct users to qualified medical or safety professionals when appropriate.
Despite these disclaimers, critics argue that such warnings are easy to overlook and may not effectively prevent misuse. Questions also remain about how the companies collect, store, and use personal data.
According to their privacy policies, Meta gathers prompts and feedback to enhance AI performance, while Character.AI collects identifiers and demographic details that may be used for advertising and other purposes. Whether these practices comply with Texas’ SCOPE Act will likely depend on how easily children can create accounts and how much parental oversight is built into the platforms.
The investigation highlights broader concerns about the role of AI in sensitive areas such as mental health and child privacy. The outcome could shape how companies must handle data from younger users and how far they must go to prevent AI systems from making misleading claims that could harm vulnerable individuals.