
Britons Risk Privacy by Sharing Sensitive Data with AI Chatbots Despite Security Concerns


Nearly one in three individuals in the UK admits to sharing confidential personal details with AI chatbots, such as OpenAI’s ChatGPT, according to new research by cybersecurity firm NymVPN. The study reveals that 30% of Britons have disclosed sensitive data—including banking information and health records—to AI tools, potentially endangering their own privacy and that of others.

Despite 48% of respondents expressing concerns over the safety of AI chatbots, many continue to reveal private details. This habit extends to professional settings, where employees are reportedly sharing internal company and customer information with these platforms.

The findings come amid a wave of high-profile cyberattacks, including the recent breach at Marks & Spencer, which underscores how easily confidential data can be compromised. NymVPN reports that 26% of survey participants have entered financial details related to salaries, mortgages, and investments, while 18% have exposed credit card or bank account numbers. Additionally, 24% acknowledged sharing customer data—such as names and email addresses—and 16% uploaded company financial records and contracts.

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.

Organizations such as M&S, Co-op, and Adidas have already made headlines for data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” Halpin added.

With nearly a quarter of people admitting to sharing customer data with AI tools, experts emphasize the urgent need for businesses to establish strict policies governing AI usage at work.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” Halpin warned.

Completely avoiding AI chatbots might be the safest option, but it's not always realistic. Users are advised to refrain from entering sensitive information, to disable chat history in their privacy settings, and to opt out of model training where the platform allows it.

Using a VPN can provide an additional layer of online privacy by encrypting internet traffic and masking IP addresses when accessing AI chatbots like ChatGPT. However, even with a VPN, risks remain if individuals continue to input confidential data.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability


Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or DeepSeek, especially for those concerned about data privacy, internet dependency, and speed. Cloud services promise data protections in their subscription terms, but users have little visibility into how their data is actually stored and used. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties.

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer chips such as Intel's Core Ultra, Qualcomm's Snapdragon X Elite, and Apple's M-series (M1–M4) come equipped with NPUs built for this purpose. On a machine with one of these chips, you can run open-source AI models like DeepSeek-R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like Open WebUI, you can replicate the experience of cloud chatbots entirely offline.
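To make this concrete, here is a minimal sketch of querying a locally running Ollama server from Python. It assumes Ollama is installed and a model has already been pulled (for example with `ollama pull llama3.3`); the model tag and prompt are illustrative. Ollama exposes a REST API on localhost, so the request never leaves the machine.

```python
import json
import urllib.request

# Ollama's local REST endpoint (default port 11434). Assumes a model has
# already been pulled on this machine, e.g. `ollama pull llama3.3`.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3.3") -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response, not a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["response"]

if __name__ == "__main__":
    # Everything here runs on-device; no data is sent to a cloud service.
    print(ask_local_model("Summarize the privacy benefits of local AI."))
```

Because the endpoint is plain HTTP on localhost, the same call works from any language, and front ends like Open WebUI talk to Ollama over this same API.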

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. However, be aware that model files can be large (from a few gigabytes to 20 GB or more, depending on the model), and without an NPU, performance may be sluggish and battery life will suffer.
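For comparison, GPT4All also ships Python bindings for a similarly small on-device workflow. A rough sketch, assuming the gpt4all package is installed (`pip install gpt4all`); the model file name is illustrative and is downloaded on first use, which can take a while given the file sizes mentioned above:

```python
from gpt4all import GPT4All

# Illustrative model file; GPT4All downloads it on first use (multi-GB).
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():  # keeps conversational context between prompts
    reply = model.generate(
        "Explain in two sentences what an NPU does.", max_tokens=120
    )
    print(reply)
```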

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and more reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, Open WebUI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed


Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

AI Agents Raise Cybersecurity Concerns Amid Rapid Enterprise Adoption


A growing number of organizations are adopting autonomous AI agents despite widespread concerns about the cybersecurity risks they pose. According to a new global report released by identity security firm SailPoint, this accelerated deployment is happening in a largely unregulated environment. The findings are based on a survey of more than 350 IT professionals, revealing that 84% of respondents said their organizations already use AI agents internally. 

However, only 44% confirmed the presence of any formal policies to regulate the agents’ actions. AI agents differ from traditional chatbots in that they are designed to independently plan and execute tasks without constant human direction. Since the emergence of generative AI tools like ChatGPT in late 2022, major tech companies have been racing to launch their own agents. Many smaller businesses have followed suit, motivated by the desire for operational efficiency and the pressure to adopt what is widely viewed as a transformative technology.  

Despite this enthusiasm, 96% of survey participants acknowledged that these autonomous systems pose security risks, while 98% stated their organizations plan to expand AI agent usage within the next year. The report warns that these agents often have extensive access to sensitive systems and information, making them a new and significant attack surface for cyber threats. Chandra Gnanasambandam, SailPoint’s Executive Vice President of Product and Chief Technology Officer, emphasized the risks associated with such broad access. He explained that these systems are transforming workflows but typically operate with minimal oversight, which introduces serious vulnerabilities. 

Further compounding the issue is the inconsistent implementation of governance controls. Although 92% of those surveyed agree that AI agents should be governed similarly to human employees, 80% reported incidents where agents performed unauthorized actions or accessed restricted data. These incidents underscore the dangers of deploying autonomous systems without robust monitoring or access controls. 

Gnanasambandam suggests adopting an identity-first approach to agent management. He recommends applying the same security protocols used for human users, including real-time access permissions, least privilege principles, and comprehensive activity tracking. Without such measures, organizations risk exposing themselves to breaches or data misuse due to the very tools designed to streamline operations. 
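As a rough illustration of that identity-first pattern, the Python sketch below treats each agent as an identity with an explicit tool allowlist (least privilege) and writes every invocation attempt, allowed or denied, to an audit log. All names here are hypothetical; this is a sketch of the general idea, not SailPoint's product or API.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent-audit")

# Least privilege: each agent identity gets only the tools it needs.
AGENT_PERMISSIONS = {
    "invoice-agent": {"read_invoices", "send_email"},
    "support-agent": {"read_tickets"},
}

# Stand-ins for real enterprise systems.
TOOLS = {
    "read_invoices": lambda: ["INV-001", "INV-002"],
    "read_tickets": lambda: ["TCK-17"],
    "send_email": lambda to: f"sent to {to}",
}

class PermissionDenied(Exception):
    pass

def invoke_tool(agent_id: str, tool: str, *args):
    """Check the agent's allowlist before running a tool; audit every attempt."""
    allowed = tool in AGENT_PERMISSIONS.get(agent_id, set())
    audit_log.info("%s agent=%s tool=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), agent_id, tool, allowed)
    if not allowed:
        raise PermissionDenied(f"{agent_id} may not call {tool}")
    return TOOLS[tool](*args)

if __name__ == "__main__":
    print(invoke_tool("invoice-agent", "read_invoices"))             # permitted
    try:
        invoke_tool("support-agent", "send_email", "x@example.com")  # denied, but audited
    except PermissionDenied as err:
        print(err)
```

The same shape extends to real deployments: scoped, short-lived credentials in place of a dictionary, and a tamper-evident log in place of stdout.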

As AI agents become more deeply embedded in business processes, experts caution that failing to implement adequate oversight could create long-term vulnerabilities. The report serves as a timely reminder that innovation must be accompanied by strong governance to ensure cybersecurity is not compromised in the pursuit of automation.

Why Microsoft Says DeepSeek Is Too Dangerous to Use

Microsoft has openly said that its workers are not allowed to use the DeepSeek app. This announcement came from Brad Smith, the company’s Vice Chairman and President, during a recent hearing in the U.S. Senate. He said the decision was made because of serious concerns about user privacy and the risk of biased content being shared through the app.

According to Smith, Microsoft does not allow DeepSeek on company devices and hasn’t included the app in its official store either. Although other organizations and even governments have taken similar steps, this is the first time Microsoft has spoken publicly about such a restriction.

The main worry is where the app stores user data. DeepSeek's privacy terms say that all user information is saved on servers based in China. This is important because Chinese laws require companies to hand over data if asked by the government. That means any data stored through DeepSeek could be accessed by Chinese authorities.

Another major issue is how the app answers questions. It’s been noted that DeepSeek avoids topics that the Chinese government sees as sensitive. This has led to fears that the app’s responses might be influenced by government-approved messaging instead of being neutral or fact-based.

Interestingly, even though Microsoft is blocking the app itself, it did allow DeepSeek’s AI model—called R1—to be used through its Azure cloud service earlier this year. But that version works differently. Developers can download it and run it on their own servers without sending any data back to China. This makes it more secure, at least in terms of data storage.

However, there are still other risks involved. Even if the model is hosted outside China, it might still share biased content or produce low-quality or unsafe code.

At the Senate hearing, Smith added that Microsoft took extra steps to make the model safer before making it available. He said the company made internal changes to reduce any harmful behavior from the model, but didn’t go into detail about what those changes were.

When DeepSeek was first added to Azure, Microsoft said the model had passed safety checks and gone through deep testing to make sure it met company standards.

Some people have pointed out that DeepSeek could be seen as a competitor to Microsoft’s own chatbot, Copilot. But Microsoft doesn’t block every competing chatbot. For example, Perplexity is available in the Windows app store. Still, some other popular apps, like Google’s Chrome browser and its Gemini chatbot, weren’t found during a search of the store.

DeepSeek AI: Benefits, Risks, and Security Concerns for Businesses


DeepSeek, an AI chatbot developed by the Chinese startup of the same name and backed by hedge fund High-Flyer, has gained rapid popularity due to its affordability and advanced natural language processing capabilities. Marketed as a cost-effective alternative to OpenAI’s ChatGPT, DeepSeek has been widely adopted by businesses looking for AI-driven insights.

However, cybersecurity experts have raised serious concerns over its potential security risks, warning that the platform may expose sensitive corporate data to unauthorized surveillance. Reports suggest that DeepSeek’s code contains embedded links to CMPassport.com, an account-registration domain operated by the state-owned telecom China Mobile. This discovery has sparked fears that businesses using DeepSeek may unknowingly be transferring sensitive intellectual property, financial records, and client communications to external entities.

Investigative findings have drawn parallels between DeepSeek and TikTok, the latter having faced a U.S. federal ban over concerns about Chinese government access to user data. In DeepSeek’s case, however, security analysts claim to have found direct evidence of potential backdoor access, raising further alarm among cybersecurity professionals. Cybersecurity expert Ivan Tsarynny warns that DeepSeek’s digital fingerprinting capabilities could allow it to track users’ web activity even after they close the app.

This means companies may be exposing not just individual employee data but also internal business strategies and confidential documents. While AI-driven tools like DeepSeek offer substantial productivity gains, business leaders must weigh these benefits against potential security vulnerabilities. A complete ban on DeepSeek may not be the most practical solution, as employees often adopt new AI tools before leadership can fully assess their risks. Instead, organizations should take a strategic approach to AI integration by implementing governance policies that define approved AI tools and security measures. 

Restricting DeepSeek’s usage to non-sensitive tasks such as content brainstorming or customer support automation can help mitigate data security concerns. Enterprises should prioritize the use of vetted AI solutions with stronger security frameworks. Platforms like OpenAI’s ChatGPT Enterprise, Microsoft Copilot, and Claude AI offer greater transparency and data protection. IT teams should conduct regular software audits to monitor unauthorized AI use and implement access restrictions where necessary. 

Employee education on AI risks and cybersecurity threats will also be crucial in ensuring compliance with corporate security policies. As AI technology continues to evolve, so do the challenges surrounding data privacy. Business leaders must remain proactive in evaluating emerging AI tools, balancing innovation with security to protect corporate data from potential exploitation.

AI's Effect on Employment: Dukaan's Divisive Automation Approach


Artificial intelligence is permeating industry after industry as businesses increasingly use it to perform jobs that have historically been handled by human workers.

Suumit Shah, CEO of the Indian e-commerce firm Dukaan, went further than most: in the summer of 2023 he made the contentious decision to lay off roughly 90% of his customer support staff and replace them with chatbots powered by artificial intelligence.

Though it sparked a heated ethical debate, this drastic measure was meant to lower operating costs and increase efficiency. A year later, Shah has shared his initial assessment of this decision, which he deems a success.

AI-Enhanced Customer Service 

Shah claims that incorporating AI into his organization has significantly improved customer service, with response times plummeting from roughly two minutes to near-instantaneous.

Furthermore, the time it takes to handle customer complaints has been dramatically reduced, from more than two hours to only a few minutes. According to him, these innovations have led to increased productivity and a better client experience. 

However, some argue that the human element in customer service cannot be replaced, and that such widespread automation risks setting a troubling precedent for the future employment market.

AI Replacing Human Jobs

The replacement of human employment by AI has long been a divisive issue, and science fiction frequently depicts a future in which AI dominates the workforce. This topic is gaining traction as AI technology advances and expands its boundaries. 

Some people see the introduction of AI as a positive change, a way to increase productivity by completing repetitive and laborious activities. Others, however, see it as an impending threat, warning that extensive automation could result in massive unemployment and difficulties adjusting to a transformed job environment.

The issue at Dukaan exemplifies how AI is increasingly changing industries. While firms benefit from lower costs and more efficiency, mass layoffs raise serious concerns about the long-term impact on employment. Finding a balance between implementing AI solutions and safeguarding job security is a pressing issue.

The Privacy Risks of ChatGPT and AI Chatbots

AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.

The Data Behind AI's Conversational Skills

Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.

For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt out through privacy settings, all shared information, from casual remarks to sensitive details like financial data, can be logged and analyzed. Although OpenAI says it anonymizes and aggregates user data for further study, the risk of unintended exposure remains.

Real-World Privacy Breaches

Despite assurances of data security, incidents have occurred. In March 2023, a bug in the open-source Redis client library used by ChatGPT exposed other users’ chat titles and, for some subscribers, payment details; later that year, security firm Group-IB reported finding credentials for roughly 101,000 compromised ChatGPT accounts, harvested by info-stealing malware, for sale on dark-web marketplaces. These incidents underscored the risks of storing chat histories, even when companies emphasize their commitment to privacy. Similarly, Samsung faced an internal crisis when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.

Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often defended as “fair use,” leaving consumers exposed to potential misuse.

Protecting Yourself in the Absence of Clear Regulations

Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider:

  1. Avoid Sharing Sensitive Information
    Treat chatbots as advanced algorithms, not confidants. Avoid disclosing personal, financial, or proprietary information, no matter how personable the AI seems.
  2. Review Privacy Settings
    Many platforms offer options to opt out of data collection. Regularly review and adjust these settings to limit the data shared with AI tools.