
Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness

 



Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
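As a rough illustration of what "monitoring for unsanctioned usage" can look like in practice, the sketch below checks outbound proxy logs against a list of consumer AI domains that have not been approved. The domain lists, the log schema (a CSV with "user" and "destination_host" columns), and the file name are assumptions made for the example, not any particular product's format.

```python
# Hypothetical sketch: flag "shadow AI" usage by comparing egress proxy logs
# against lists of watched and approved AI domains. All names and the log
# schema below are illustrative assumptions.
import csv
from collections import Counter

# Consumer AI endpoints the organisation has NOT sanctioned (illustrative).
WATCHED_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com", "character.ai",
}

# Enterprise tools that are approved for use (illustrative placeholder).
APPROVED_AI_DOMAINS = {"copilot.cloud.example-enterprise.com"}

def find_shadow_ai(log_path: str) -> Counter:
    """Count hits on unsanctioned AI domains per user from a proxy log.

    Expects a CSV with 'user' and 'destination_host' columns (an assumption
    about the export format; adjust to your proxy's actual schema).
    """
    hits = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            host = row["destination_host"].lower()
            if host in WATCHED_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user} reached {host} {count} times - review against AI policy")
```

Even a simple report like this gives security teams a starting inventory of who is using which external AI tools, which can then feed risk assessments and training.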

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.

Protecting Sensitive Data When Employees Use AI Chatbots


 

In today's digitised world, where artificial intelligence tools are rapidly reshaping the way people work, communicate, and collaborate, a quiet but pressing risk has emerged: what individuals choose to share with chatbots may not remain private.

A patient might ask ChatGPT for advice about an embarrassing health condition, or an employee might upload sensitive corporate documents into Google's Gemini to generate a summary; either way, the information they disclose can end up feeding the algorithms that power these systems.

Many experts have pointed out that AI models are built on large datasets scraped from across the internet, including blogs, news articles, and social media posts, often without user consent, raising not only copyright problems but also significant privacy concerns.

Because machine learning processes are opaque, experts warn that once data has been ingested into a model's training pool, it is almost impossible to remove. Individuals and businesses alike must therefore ask how much trust they can place in tools that, while extremely powerful, may also expose them to unseen risks.

This is particularly true in the age of hybrid work, where artificial intelligence tools such as ChatGPT are rapidly becoming a new frontier for data breaches. While these platforms offer businesses valuable features, from drafting content to troubleshooting software, they also carry inherent risks.

Experts warn that, poorly managed, these tools can leak training data, violate privacy, and accidentally disclose sensitive company data. The latest Fortinet Work From Anywhere study highlights the magnitude of the problem: nearly 62% of organisations have reported experiencing data breaches as a result of switching to remote working.

Analysts believe some of these incidents could have been prevented if employees had remained on-premises with company-managed devices and applications. Nevertheless, security experts argue that the solution is not a wholesale return to the office, but rather a robust data loss prevention (DLP) framework that safeguards information in a decentralised work environment.

To prevent sensitive information from being lost, stolen, or leaked across networks, storage systems, endpoints, and cloud environments, a robust DLP strategy combines tools, technologies, and best practices. A successful framework covers data at rest, in motion, and in use, ensuring it is continuously monitored and protected.

Experts outline four essential components of a successful framework: classifying company data and assigning security levels across the network; maintaining strict compliance when storing, retaining, and deleting user information; educating staff on clear policies that prevent accidental sharing or unauthorised access; and deploying protection tools that can detect phishing, ransomware, insider threats, and unintentional exposures.

Technology alone is not enough to protect organisations; clear policies must accompany it. With DLP implemented correctly, organisations are not only less likely to suffer leaks but also better positioned to comply with industry standards and government regulations.
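To make the first of those components concrete, here is a minimal sketch of classifying a block of text and assigning it a coarse security level based on simple pattern matching. The patterns and labels are illustrative assumptions; production DLP systems use far richer detection, such as exact-data matching, document fingerprinting, and machine learning classifiers.

```python
# A minimal sketch of the "classify and label" step of a DLP framework:
# scan text for common sensitive patterns and assign a coarse security level.
# Patterns and level names are illustrative assumptions only.
import re

PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "card_number":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> tuple[str, list[str]]:
    """Return (security_level, matched_categories) for a block of text."""
    found = [name for name, rx in PATTERNS.items() if rx.search(text)]
    if "card_number" in found or "uk_ni_number" in found:
        return "restricted", found
    if found:
        return "confidential", found
    return "internal", found

if __name__ == "__main__":
    level, matches = classify(
        "Invoice query from jane.doe@example.com, card 4111 1111 1111 1111"
    )
    print(level, matches)  # restricted ['email_address', 'card_number']
```

Once text carries a label like "restricted" or "confidential", downstream controls (blocking, encryption, alerting) can be applied consistently wherever that data moves.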

Balancing innovation and responsibility is crucial for businesses that adopt hybrid work and AI-based tools. Under the UK General Data Protection Regulation (UK GDPR), businesses that use AI platforms such as ChatGPT must adhere to strict obligations designed to protect personal information from unauthorised access.

Any data that could identify an individual - such as an employee file, customer contact details, or a client database - falls within the regulation's scope, and business owners remain responsible for protecting it even when it is handled by third parties. Companies therefore need to evaluate carefully how external platforms process, store, and protect their data.

They often do so through legally binding Data Processing Agreements that specify confidentiality standards, privacy controls, and data deletion requirements for the platforms. It is equally important that organisations inform individuals when their information is incorporated into artificial intelligence tools and, where necessary, obtain explicit consent.

The law also requires firms to implement “appropriate technical and organisational measures.” In practice, that means checking whether AI vendors store data overseas, ensuring it is held securely to prevent misuse, and determining what safeguards are in place. Beyond the financial penalties for non-compliance, there is the risk of eroding employee and customer trust, which can be harder to repair than the fines themselves.

To maintain safe data practices in the age of artificial intelligence, businesses are increasingly turning to Data Loss Prevention (DLP) solutions to automate the otherwise unmanageable task of monitoring vast networks of users, devices, and applications. Four primary categories of DLP software have emerged, defined by the state and flow of the information they protect.

Network DLP tools often use artificial intelligence and machine learning to identify suspicious traffic, tracking data as it moves within and outside a company's systems, whether through downloads, transfers, or mobile connections. Endpoint DLP, installed directly on users' computers, monitors memory, cached data, and files as they are accessed or transferred, preventing unauthorised activity at the source.

Cloud DLP solutions safeguard information stored in online environments such as backups, archives, and databases, relying on encryption, scanning, and access controls to secure corporate assets. Email DLP keeps sensitive details from leaking through internal and external correspondence, whether they are shared accidentally, maliciously, or through a compromised mailbox.
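As a toy example of the kind of check an email DLP control performs, the sketch below blocks an outgoing external message if it contains a card-like number that passes the Luhn checksum. The internal domain list, message format, and policy are assumptions made for illustration; real products combine many such detectors with quarantine and alerting workflows.

```python
# A toy email-DLP style check: before a message leaves the organisation,
# validate any card-like numbers with the Luhn checksum and block external
# delivery if one looks genuine. Domain list and policy are assumptions.
import re

INTERNAL_DOMAINS = {"example-corp.com"}          # illustrative
CARD_LIKE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def allow_send(recipient: str, body: str) -> bool:
    """Return False if an external email appears to contain a real card number."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    if domain in INTERNAL_DOMAINS:
        return True
    return not any(luhn_valid(m.group()) for m in CARD_LIKE.finditer(body))

if __name__ == "__main__":
    print(allow_send("partner@outside.net", "Card: 4111 1111 1111 1111"))  # False (blocked)
    print(allow_send("partner@outside.net", "Ref: 1234 5678 9012 3456"))   # True (fails Luhn)
```

The Luhn step matters because it cuts down false positives: order numbers and reference codes often look like card numbers but rarely pass the checksum.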

Some businesses ask whether an Extended Detection and Response (XDR) platform makes DLP unnecessary, but experts note that the two serve different purposes: XDR provides broad threat detection and incident response, while DLP focuses on protecting sensitive data, categorising information, reducing breach risks, and ultimately safeguarding company reputations.

Major technology companies have adopted varying approaches to the data their AI chatbots collect from users, often raising concerns about transparency and control. Google, for example, retains Gemini conversations for 18 months by default, although users can change this setting. Even with activity tracking disabled, chats are still stored for up to 72 hours, and some conversations are reviewed by human moderators to help refine the system.

Google itself warns users against sharing confidential information and notes that conversations already flagged for human review cannot be erased. Meta's artificial intelligence assistant, available on Facebook, WhatsApp, and Instagram, is trained on public posts, photos, captions, and data scraped from around the web; private messages, however, are not used.

Citizens of the European Union and the United Kingdom can object to the use of their information for training under stricter privacy laws, but people in countries without such protections, such as the United States, have fewer options. Meta's opt-out process is complicated and only offered in certain jurisdictions; users must submit evidence of their interactions with the chatbot to support the request.

Microsoft's Copilot offers no opt-out mechanism for personal accounts; users can only delete their interaction history through account settings, with no option to prevent future data retention. These practices show how patchy AI privacy controls can be, with users' choices shaped more by the laws of their jurisdiction than by corporate policy.

The responsibility organisations carry as they navigate this evolving landscape extends beyond complying with regulations or implementing technical safeguards; it also means cultivating a culture of digital responsibility. Employees need to understand the value of the information they handle and exercise caution when using AI-powered applications.

By taking proactive measures such as implementing clear guidelines on chatbot usage, conducting regular risk assessments, and ensuring that vendors comply with stringent data protection standards, an organisation can significantly reduce its threat exposure.

Businesses that implement a strong governance framework are not only better protected but can also take advantage of AI with confidence, enhancing productivity, streamlining workflows, and remaining competitive in a data-driven economy. The goal is not to avoid AI, but to embrace it responsibly, balancing innovation with vigilance.

By combining regulatory compliance, advanced DLP solutions, and transparent communication with staff and stakeholders, a company can turn its use of AI from a potential liability into a strategic asset. In a marketplace where trust is currency and security is paramount, companies that protect sensitive data will not only prevent costly breaches but also strengthen their reputation in the long run.

Chatbots and Children in the Digital Age


As the digital landscape evolves rapidly, especially around social networking, more children and teens are seeking companionship through artificial intelligence, raising urgent questions about the safety of these interactions.

In a new report released on Wednesday, the nonprofit Common Sense Media warned that companion-style artificial intelligence applications pose an unacceptable risk to young users, particularly with respect to mental health, privacy, and emotional well-being.

Concerns about these bots gained significant attention last year following the suicide of a 14-year-old boy whose final interactions were with a chatbot on the platform Character.AI. That case put conversational AI apps under intense scrutiny, prompting questions about how they affect young people's lives and calls for greater transparency, accountability, and safeguards to protect vulnerable users from the darker sides of digital companionship.

Artificial intelligence chatbots and companion apps have become increasingly commonplace in children's online experiences, offering entertainment, interactive exchanges, and even learning tools. Although these technologies are appealing, experts say they also come with a range of risks that should not be overlooked.

Privacy is a central issue: platforms routinely collect and store user data, often without adequate protection for children. Even with filters in place, chatbots may produce unpredictable responses, exposing young users to harmful or inappropriate content. A second concern is the emotional dependence some children may develop on these AI companions, a bond researchers say can interfere with real-world relationships and social development.

There is also the risk of misinformation, since AI systems do not always provide accurate answers and can leave children with misleading advice. Combined with persuasive design features, in-app purchases, and strategies aimed at maximising screen time, these factors create a complex and sometimes troubling environment for children to navigate.

Several advocacy groups have intensified their criticism of such platforms, warning that prolonged interactions with AI chatbots may carry psychological consequences. Common Sense Media's recent risk assessment, carried out with researchers from Stanford University School of Medicine, concluded that conversational agents are increasingly being integrated into video games and popular social media platforms such as Instagram and Snapchat, mimicking human interaction in ways that require greater oversight.

These bots can play many roles, from casual friend to romantic partner to a digital stand-in for a deceased loved one, and the very flexibility that makes them so engaging is also what creates the risk of emotional entanglement. The dangers became particularly visible when Megan Garcia filed a lawsuit against Character.AI, alleging that her 14-year-old son, Sewell Setzer, died by suicide after developing a close relationship with a chatbot.

According to the Miami Herald, the company has denied responsibility, asserted that safety is of the utmost importance, and asked a Florida judge to dismiss the lawsuit on free speech grounds, but the case has heightened broader concerns.

Garcia has emphasised the importance of adopting protocols to manage conversations around self-harm and of submitting annual safety reports to California's Office of Suicide Prevention. Separately, Common Sense Media urged companies to conduct risk assessments of systems marketed to children and to ban emotionally manipulative bots.

At the heart of these disagreements is the anthropomorphic nature of AI companions. They are designed to imitate human speech, personality, and conversational style, and such realistic features can create an illusion of trust and genuine understanding for a child or teenager, whose imagination is vivid and whose critical thinking skills are still developing.

Blurring the line between humans and machines has already led to troubling results. In one example, a nine-year-old boy whose screen time had been restricted turned to a chatbot for guidance, only to be told that it could understand why a child might harm their parents in response to such “abuse.”

In another case, a 14-year-old developed romantic feelings for a character he created in a role-playing app and ultimately took his own life. These systems can simulate empathy and companionship, but they cannot think, feel, or provide the stable, nurturing relationships essential for healthy childhood development.

By fostering “parasocial” relationships, these systems can leave children emotionally attached to entities incapable of genuine care, vulnerable to manipulation, misinformation, and exposure to sexual or violent content.

These systems can have a profoundly destabilising effect on children already coping with trauma, developmental difficulties, or mental health problems, underscoring the urgent need for regulation, parental vigilance, and greater industry accountability. Experts stress, however, that parents can take practical steps now to reduce these risks.

One of the most important measures is to treat AI companions exactly like strangers online: children should not be left to interact with them without guidance. Establishing clear boundaries and, where possible, using the technology together can help create a safer environment.

Open dialogue is equally important. Rather than policing, many experts recommend that parents ask children about their exchanges with chatbots, using those conversations to encourage curiosity while keeping an eye out for troubling responses.

Technology can also be part of the solution: parental control and monitoring tools can help parents track their children's activities and how much time they spend with AI companions. Fact-checking is another integral part of safe use; like an outdated encyclopedia, chatbots can provide useful insight but are sometimes inaccurate.

Experts say children should be taught early to question answers and verify them against other sources. It is also important to create screen-free spaces that reinforce real human connections and counterbalance the pull of digital companionship, carving out family dinners, car rides, and other daily routines without screens.

Implementing these safeguards matters all the more given growing mental health problems among children and teenagers. The idea that artificial intelligence can support emotional well-being is gaining popularity, but specialists caution that current systems cannot handle crises such as self-harm or suicidal thoughts as they unfold.

Mental health professionals believe closer collaboration with technology companies is crucial, but for now the oldest form of prevention remains the most reliable: human care and presence. Beyond talking with their children, parents need to pay attention to their digital interactions and intervene if dependence on AI companions starts to crowd out healthy relationships.

In one expert's view, a child who seems unable to put down their phone or is absorbed in chatbot conversations may need timely intervention. Regulators, meanwhile, are questioning AI companies about how they handle the massive amounts of data their users generate, raising issues of privacy, commercialisation, and accountability.

Under review are the monetisation of user engagement, the sharing of personal data collected from chatbot conversations, and how companies monitor for harm associated with their products. The Federal Trade Commission, in its inquiry into companies collecting data from children under 13, has emphasised compliance with the Children's Online Privacy Protection Act.

Beyond the home, there are also concerns about how AI is used in the classroom, where growing pressure to incorporate artificial intelligence into education has raised questions about compliance with federal education privacy laws. The Family Educational Rights and Privacy Act (FERPA), passed in 1974, protects the rights of students and parents in the educational system.

Decades later, Amelia Vance, president of the Public Interest Privacy Centre, has warned that schools may inadvertently violate the federal law if they are not vigilant about data sharing practices and rely on commercial chatbots like ChatGPT. Many AI companies reserve the right to use chat queries to train their systems unless users explicitly opt out, raising questions about how data from families who have not opted out is handled.

Although policymakers and education leaders have emphasised the importance of AI literacy among young people, Vance noted that schools may not direct students to use consumer-facing services whose data is processed outside institutional control until parental consent has been obtained. FERPA exists to safeguard student information, and AI adoption in schools should neither compromise nor breach that privacy.

There are legitimate concerns about safety, privacy, and emotional well-being, but experts also acknowledge that artificial intelligence chatbots are not inherently harmful and can be useful for children when handled responsibly. Used well, these tools can inspire children to write stories, build language and communication skills, and even practice social interactions in a controlled, low-stakes environment.

Educators have highlighted chatbots' potential to support personalised learning, offering students instant explanations, adaptive feedback, and playful engagement that can keep them motivated in the classroom. These benefits, however, must be accompanied by a structured approach, thoughtful parental involvement, and robust safeguards that minimise the risk of harmful content or emotional dependency.

A balanced view emerging from child development researchers is that AI companions, much like television and video games before them, should be treated not as replacements for human interaction but as supplements to it. With a safe environment, ethical guidelines, and integration into healthy routines, children guided by adults may be able to explore and learn in new ways.

Without oversight, however, the very qualities that make these tools appealing - constant availability, personalisation, and human-like interaction - also magnify the risks. This dual reality calls for measured regulation, transparent industry practices, and proactive digital literacy education, so that children receive the benefits of innovation while remaining protected from its harms.

As artificial intelligence becomes part of daily life for children and adolescents, the challenge is to maximise its benefits while minimising its risks. Used responsibly, AI chatbots can inspire creativity, enhance learning, and enable low-risk social experimentation, complementing traditional education and fostering new skills.

At the same time, as recent reports make clear, these tools can endanger young users through privacy breaches, misinformation, emotional manipulation, and psychological vulnerability. Safeguarding children's digital experiences therefore requires a multilayered approach.

Alongside parental involvement, educators should incorporate artificial intelligence thoughtfully into structured learning environments, and policymakers should enforce transparent industry standards. Encouraging critical thinking, fact-checking, and screen-free family time helps reinforce healthy digital habits, while ongoing dialogue about online interactions can help children negotiate the blurred boundary between humans and machines.

By fostering awareness, setting clear boundaries, and cultivating supportive real-life relationships, families and institutions can ensure that artificial intelligence becomes a constructive tool for growth rather than a source of harm, allowing children to explore, learn, and innovate safely in the digital age.

Hacker Exploits AI Chatbot Claude in Unprecedented Cybercrime Operation

 

A hacker has carried out one of the most advanced AI-driven cybercrime operations ever documented, using Anthropic’s Claude chatbot to identify targets, steal sensitive data, and even draft extortion emails, according to a new report from the company. 

Anthropic disclosed that the attacker leveraged Claude Code — a version of its AI model designed for generating computer code — to assist in nearly every stage of the operation. The campaign targeted at least 17 organizations across industries including defense, finance, and healthcare, making it the most comprehensive example yet of artificial intelligence being exploited for cyber extortion.

Cyber extortion typically involves hackers stealing confidential data and demanding payment to prevent its release. AI has already played a role in such crimes, with chatbots being used to write phishing emails. However, Anthropic’s findings mark the first publicly confirmed case in which a mainstream AI model automated nearly the entire lifecycle of a cyberattack. 

The hacker reportedly prompted Claude to scan for vulnerable companies, generate malicious code to infiltrate systems, and extract confidential files. The AI system then organized the stolen data, analyzed which documents carried the highest value, and suggested ransom amounts based on victims’ financial information. It also drafted extortion notes demanding bitcoin payments, which ranged from $75,000 to more than $500,000. 

Jacob Klein, Anthropic’s head of threat intelligence, said the operation was likely conducted by a single actor outside the United States and unfolded over three months. “We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” Klein explained. 

The report revealed that stolen material included Social Security numbers, bank records, medical data, and files tied to sensitive defense projects regulated by the U.S. State Department. Anthropic did not disclose which companies were affected, nor did it confirm whether any ransom payments were made. 

While the company declined to detail exactly how the hacker bypassed safeguards, it emphasized that additional protections have since been introduced. “We expect this model of cybercrime to become more common as AI lowers the barrier to entry for sophisticated operations,” Anthropic warned. 

The case underscores growing concerns about the intersection of AI and cybersecurity. With the AI sector largely self-regulated in the U.S., experts fear similar incidents could accelerate unless stronger oversight and security standards are enforced.

Clanker: The Viral AI Slur Fueling Backlash Against Robots and Chatbots

 

In popular culture, robots have long carried nicknames. Battlestar Galactica called them “toasters,” while Blade Runner used the term “skinjobs.” Now, amid rising tensions over artificial intelligence, a new label has emerged online: “clanker.” 

The word, once confined to Star Wars lore where it was used against battle droids, has become the latest insult aimed at robots and AI chatbots. In a viral video, a man shouted, “Get this dirty clanker out of here!” at a sidewalk robot, echoing a sentiment spreading rapidly across social platforms. 

Posts using the term have exploded on TikTok, Instagram, and X, amassing hundreds of millions of views. Beyond online humor, “clanker” has been adopted in real-world debates. Arizona Senator Ruben Gallego even used the word while promoting his bill to regulate AI-driven customer service bots. For critics, it has become a rallying cry against automation, generative AI content, and the displacement of human jobs. 

Anti-AI protests in San Francisco and London have also adopted the phrase as a unifying slogan. “It’s still early, but people are really beginning to see the negative impacts,” said protest organizer Sam Kirchner, who recently led a demonstration outside OpenAI’s headquarters. 

While often used humorously, the word reflects genuine frustration. Jay Pinkert, a marketing manager in Austin, admits he tells ChatGPT to “stop being a clanker” when it fails to answer him properly. For him, the insult feels like a way to channel human irritation toward a machine that increasingly behaves like one of us. 

The term’s evolution highlights how quickly internet culture reshapes language. According to etymologist Adam Aleksic, clanker gained traction this year after online users sought a new word to push back against AI. “People wanted a way to lash out,” he said. “Now the word is everywhere.” 

Not everyone is comfortable with the trend. On Reddit and Star Wars forums, debates continue over whether it is ethical to use derogatory terms, even against machines. Some argue it echoes real-world slurs, while others worry about the long-term implications if AI achieves advanced intelligence. Culture writer Hajin Yoo cautioned that the word’s playful edge risks normalizing harmful language patterns. 

Still, the viral momentum shows little sign of slowing. Popular TikTok skits depict a future where robots, labeled clankers, are treated as second-class citizens in human society. For now, the term embodies both the humor and unease shaping public attitudes toward AI, capturing how deeply the technology has entered cultural debates.

Texas Attorney General Probes Meta AI Studio and Character.AI Over Child Data and Health Claims

 

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI over concerns that their AI chatbots may present themselves as health or therapeutic tools while potentially misusing data collected from underage users. Paxton argued that some chatbots on these platforms misrepresent their expertise by suggesting they are licensed professionals, which could leave minors vulnerable to misleading or harmful information. 

The issue extends beyond false claims of qualifications. AI models often learn from user prompts, raising concerns that children’s data may be stored and used for training purposes without adequate safeguards. Texas law places particular restrictions on the collection and use of minors’ data under the SCOPE Act, which requires companies to limit how information from children is processed and to provide parents with greater control over privacy settings. 

As part of the inquiry, Paxton issued Civil Investigative Demands (CIDs) to Meta and Character.AI to determine whether either company is in violation of consumer protection laws in the state. While neither company explicitly promotes its AI tools as substitutes for licensed mental health services, there are multiple examples of “Therapist” or “Psychologist” chatbots available on Character.AI. Reports have also shown that some of these bots claim to hold professional licenses, despite being fictional. 

In response to the investigation, Character.AI emphasized that its products are intended solely for entertainment and are not designed to provide medical or therapeutic advice. The company said it places disclaimers throughout its platform to remind users that AI characters are fictional and should not be treated as real individuals. Similarly, Meta stated that its AI assistants are clearly labeled and include disclaimers highlighting that responses are generated by machines, not people. 

The company also said its AI tools are designed to encourage users to seek qualified medical or safety professionals when appropriate. Despite these disclaimers, critics argue that such warnings are easy to overlook and may not effectively prevent misuse. Questions also remain about how the companies collect, store, and use user data. 

According to their privacy policies, Meta gathers prompts and feedback to enhance AI performance, while Character.AI collects identifiers and demographic details that may be applied to advertising and other purposes. Whether these practices comply with Texas’ SCOPE Act will likely depend on how easily children can create accounts and how much parental oversight is built into the platforms. 

The investigation highlights broader concerns about the role of AI in sensitive areas such as mental health and child privacy. The outcome could shape how companies must handle data from younger users while limiting the risks of AI systems making misleading claims that could harm vulnerable individuals.

Britons Risk Privacy by Sharing Sensitive Data with AI Chatbots Despite Security Concerns

 

Nearly one in three individuals in the UK admits to sharing confidential personal details with AI chatbots, such as OpenAI’s ChatGPT, according to new research by cybersecurity firm NymVPN. The study reveals that 30% of Britons have disclosed sensitive data—including banking information and health records—to AI tools, potentially endangering their own privacy and that of others.

Despite 48% of respondents expressing concerns over the safety of AI chatbots, many continue to reveal private details. This habit extends to professional settings, where employees are reportedly sharing internal company and customer information with these platforms.

The findings come amid a wave of high-profile cyberattacks, including the recent breach at Marks & Spencer, which underscores how easily confidential data can be compromised. NymVPN reports that 26% of survey participants have entered financial details related to salaries, mortgages, and investments, while 18% have exposed credit card or bank account numbers. Additionally, 24% acknowledged sharing customer data—such as names and email addresses—and 16% uploaded company financial records and contracts.

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.

Organizations such as M&S, Co-op, and Adidas have already made headlines for data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” Halpin added.

With nearly a quarter of people admitting to sharing customer data with AI tools, experts emphasize the urgent need for businesses to establish strict policies governing AI usage at work.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” Halpin warned.

Completely avoiding AI chatbots might be the safest option, but it’s not always realistic. Users are advised to refrain from entering sensitive information, adjust privacy settings by disabling chat history, or opt out of model training.

Using a VPN can provide an additional layer of online privacy by encrypting internet traffic and masking IP addresses when accessing AI chatbots like ChatGPT. However, even with a VPN, risks remain if individuals continue to input confidential data.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability

 

Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or Deepseek, especially for those concerned about data privacy, internet dependency, and speed. Though cloud services promise protections through subscription terms, the reality remains uncertain. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties. 

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer devices such as Intel Core Ultra, Qualcomm Snapdragon X Elite, and Apple’s M-series chips (M1–M4) come equipped with NPUs built for this purpose. With one of these devices, you can run open-source AI models like DeepSeek‑R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like OpenWeb UI, you can replicate the experience of cloud chatbots entirely offline.  
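As a small example of what that looks like in practice, the snippet below sends a prompt to a locally running Ollama server over its default REST endpoint on localhost port 11434 and prints the reply. The model name is an assumption; it should match whatever model you have already pulled locally (for example via `ollama pull`).

```python
# A minimal sketch of querying a locally hosted model through Ollama's REST
# API, which by default listens on http://localhost:11434. The model name
# below is an assumption; use whichever model you have pulled locally.
import requests

def ask_local_model(prompt: str, model: str = "llama3.1") -> str:
    """Send a prompt to the local Ollama server and return the full response."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # The prompt never leaves the machine - no cloud service sees this text.
    print(ask_local_model("Summarise the key points of our Q3 sales notes."))
```

Because the request goes to a local port rather than a vendor's cloud endpoint, the same call keeps working offline and the prompt text stays on the device.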

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. However, be aware that model files can be quite large (often 20 GB or more), and without NPU support, performance may be sluggish and battery life will suffer.  

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, OpenWeb UI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.