
Meta.ai Privacy Lapse Exposes User Chats in Public Feed


Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts not only reflect poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude, believing the bot had helped. But a growing number of conversations ended in frustration or embarrassment when users realized the bot could not deliver on its promises or that their content had been shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

Experts Warn: AI Chatbots a ‘Treasure Trove’ for Criminals, Avoid ‘Free Accounts’


Cybersecurity experts have informed The U.S. Sun that chatbots represent a "treasure trove" ripe for exploitation by criminals. Artificial intelligence chatbots are advancing rapidly, becoming more accessible and efficient.

Because these AI systems mimic human conversation so well, there's a temptation to trust them and divulge sensitive information.

Jake Moore, Global Cybersecurity Advisor at ESET, explained that while the AI "models" behind chatbots are generally secure, there are hidden dangers.

"With companies like OpenAI and Microsoft leading the development of chatbots, they closely protect their networks and algorithms," Jake stated. "If these were compromised, it would jeopardize their business future."

A New Threat Landscape

Jake pointed out that the primary risk lies in the potential exposure of the information you share with chatbots.

The details you share during chatbot interactions are stored somewhere, just as texts, emails, and backup files are. How safe those interactions are depends on how securely that data is held. "The data you input into chatbots is stored on a server and, despite encryption, could become as valuable as personal search histories to cybercriminals," Jake explained.

"There is already a significant amount of personal information being shared. With the anticipated launch of OpenAI's search engine, even more sensitive data will be at risk in a new and attractive space for criminals."

Jake emphasized the importance of using chatbots that encrypt your conversations. Encryption scrambles data, making it unreadable to unauthorized users.
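To make that idea concrete, the sketch below shows, in Python, how a stored chat message could be scrambled with symmetric encryption so that a leaked copy of the database alone does not reveal the plaintext. The Fernet recipe from the cryptography package, the sample message, and the in-memory key handling are assumptions chosen for illustration; this is not how any particular chatbot vendor actually stores data.

```python
# Minimal sketch of encrypting a chat message at rest (illustrative only).
# Requires the third-party 'cryptography' package: pip install cryptography
from cryptography.fernet import Fernet

# In a real service the key would live in a key-management system,
# never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

chat_message = "I'm worried about a medical symptom, what should I do?"
stored_blob = cipher.encrypt(chat_message.encode("utf-8"))  # what lands on the server

print(stored_blob)                                   # unreadable without the key
print(cipher.decrypt(stored_blob).decode("utf-8"))   # readable only to the key holder
```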

OpenAI states that ChatGPT conversations are encrypted in transit and at rest, whether you’re a free or paid user. Even so, avoid sharing personal thoughts and intimate details, as they could still be accessed by others. 

However, some apps may charge for encryption or not offer it at all. Even encrypted conversations may be used to train chatbot models, although ChatGPT allows users to opt out and delete their data.

"People must be careful about what they input into chatbots, especially in free accounts that don’t anonymize or encrypt data," Jake advised.

Further, security expert Dr. Martin J. Kraemer from KnowBe4 emphasized the need for caution.

"Never share sensitive information with a chatbot," Dr. Kraemer advised. "You may need to share certain details like a flight booking code with an airline chatbot, but that should be an exception. It's safer to call directly instead of using a chatbot. Never share your password or other authentication details with a chatbot. Also, avoid sharing personal thoughts and intimate details, as these could be accessed by others."