
Texas Attorney General Probes Meta AI Studio and Character.AI Over Child Data and Health Claims

 

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI over concerns that their AI chatbots may present themselves as health or therapeutic tools while potentially misusing data collected from underage users. Paxton argued that some chatbots on these platforms misrepresent their expertise by suggesting they are licensed professionals, which could leave minors vulnerable to misleading or harmful information. 

The issue extends beyond false claims of qualifications. AI models often learn from user prompts, raising concerns that children’s data may be stored and used for training purposes without adequate safeguards. Texas law places particular restrictions on the collection and use of minors’ data under the SCOPE Act, which requires companies to limit how information from children is processed and to provide parents with greater control over privacy settings. 

As part of the inquiry, Paxton issued Civil Investigative Demands (CIDs) to Meta and Character.AI to determine whether either company is in violation of consumer protection laws in the state. While neither company explicitly promotes its AI tools as substitutes for licensed mental health services, there are multiple examples of “Therapist” or “Psychologist” chatbots available on Character.AI. Reports have also shown that some of these bots claim to hold professional licenses, despite being fictional. 

In response to the investigation, Character.AI emphasized that its products are intended solely for entertainment and are not designed to provide medical or therapeutic advice. The company said it places disclaimers throughout its platform to remind users that AI characters are fictional and should not be treated as real individuals. Similarly, Meta stated that its AI assistants are clearly labeled and include disclaimers highlighting that responses are generated by machines, not people. 

Meta also said its AI tools are designed to encourage users to seek qualified medical or safety professionals when appropriate. Despite these disclaimers, critics argue that such warnings are easy to overlook and may not effectively prevent misuse. Questions also remain about how the companies collect, store, and use user data. 

According to their privacy policies, Meta gathers prompts and feedback to enhance AI performance, while Character.AI collects identifiers and demographic details that may be used for advertising and other purposes. Whether these practices comply with Texas’ SCOPE Act will likely depend on how easily children can create accounts and how much parental oversight is built into the platforms. 

The investigation highlights broader concerns about the role of AI in sensitive areas such as mental health and child privacy. The outcome could shape how companies must handle data from younger users while limiting the risks of AI systems making misleading claims that could harm vulnerable individuals.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

Experts Warn: AI Chatbots a ‘Treasure Trove’ for Criminals, Avoid ‘Free Accounts’

 

Cybersecurity experts have told The U.S. Sun that chatbots represent a "treasure trove" ripe for exploitation by criminals. Artificial intelligence chatbots are advancing rapidly, becoming more capable, more accessible, and more efficient.

Because these AI systems mimic human conversation so well, there's a temptation to trust them and divulge sensitive information.

Jake Moore, Global Cybersecurity Advisor at ESET, explained that while the AI "models" behind chatbots are generally secure, there are hidden dangers.

"With companies like OpenAI and Microsoft leading the development of chatbots, they closely protect their networks and algorithms," Jake stated. "If these were compromised, it would jeopardize their business future."

A New Threat Landscape

Jake pointed out that the primary risk lies in the potential exposure of the information you share with chatbots.

The details you share during chatbot interactions are stored somewhere, just as texts, emails, and backup files are, and the security of those interactions depends on how well that stored data is protected. "The data you input into chatbots is stored on a server and, despite encryption, could become as valuable as personal search histories to cybercriminals," Jake explained.

"There is already a significant amount of personal information being shared. With the anticipated launch of OpenAI's search engine, even more sensitive data will be at risk in a new and attractive space for criminals."

Jake emphasized the importance of using chatbots that encrypt your conversations. Encryption scrambles data, making it unreadable to unauthorized users.
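To make that idea concrete, here is a minimal sketch of symmetric encryption using Python's cryptography package and its Fernet recipe. It is a generic, hypothetical illustration of how ciphertext becomes unreadable without the key, not a description of how any chatbot provider actually stores conversations.

    # Minimal sketch: symmetric encryption with the "cryptography" package
    # (pip install cryptography). Hypothetical example only; it does not
    # reflect any vendor's real storage design.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # secret key; anyone holding it can read the data
    cipher = Fernet(key)

    message = b"example chat message"  # placeholder for whatever a user types
    token = cipher.encrypt(message)    # ciphertext looks like random bytes in storage
    print(token)

    print(cipher.decrypt(token).decode())  # only a key holder can recover the plaintext

The takeaway from the sketch is that encryption only protects data from parties who lack the key; the operator of the server that holds the key, such as the chatbot provider itself, can still decrypt and read what was typed.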

OpenAI encrypts ChatGPT conversations in transit and at rest for both free and paid users, but that does not make them unreadable to the provider itself. Even so, avoid sharing personal thoughts and intimate details, as they could be accessed by others. 

However, some apps may charge for encryption or not offer it at all. Even encrypted conversations may be used to train chatbot models, although ChatGPT allows users to opt out and delete their data.

"People must be careful about what they input into chatbots, especially in free accounts that don’t anonymize or encrypt data," Jake advised.

Further, security expert Dr. Martin J. Kraemer from KnowBe4 emphasized the need for caution.

"Never share sensitive information with a chatbot," Dr. Kraemer advised. "You may need to share certain details like a flight booking code with an airline chatbot, but that should be an exception. It's safer to call directly instead of using a chatbot. Never share your password or other authentication details with a chatbot. Also, avoid sharing personal thoughts and intimate details, as these could be accessed by others."