
SABO Fashion Brand Exposes 3.5 Million Customer Records in Major Data Leak

 

Australian fashion retailer SABO recently faced a significant data breach that exposed sensitive personal information of millions of customers. The incident came to light when cybersecurity researcher Jeremiah Fowler discovered an unsecured database containing over 3.5 million PDF documents, totaling 292 GB in size. The database, which had no password protection or encryption, was publicly accessible online to anyone who knew where to look. 

The leaked records included a vast amount of personally identifiable information (PII), such as names, physical addresses, phone numbers, email addresses, and other order-related data of both retail and business clients. According to Fowler, the actual number of affected individuals could be substantially higher than the number of files. He observed that a single PDF file sometimes contained details from up to 50 separate orders, suggesting that the total number of exposed customer profiles might exceed 3.5 million. 

The information was derived from SABO’s internal document management system used for handling sales, returns, and shipping data—both within Australia and internationally. The files dated back to 2015 and stretched through to 2025, indicating a mix of outdated and still-relevant information that could pose risks if misused. Upon discovering the open database, Fowler immediately notified the company. SABO responded by securing the exposed data within a few hours. 

However, the brand did not reply to the researcher’s inquiries, leaving critical questions unanswered—such as how long the data remained vulnerable, who was responsible for managing the server, and whether malicious actors accessed the database before it was locked. SABO, known for its stylish collections of clothing, swimwear, footwear, and formalwear, operates three physical stores in Australia and also ships products globally through its online platform. 

In 2024, the brand reported annual revenue of approximately $18 million, underscoring its scale and reach in the retail space. While SABO has taken action to secure the exposed data, the breach underscores ongoing challenges in cybersecurity, especially among mid-sized e-commerce businesses. Data left unprotected on the internet can be quickly exploited, and even short windows of exposure can have lasting consequences for customers. 

The lack of transparency following the discovery only adds to growing concerns about how companies handle consumer data and whether they are adequately prepared to respond to digital threats.

Britons Risk Privacy by Sharing Sensitive Data with AI Chatbots Despite Security Concerns

 

Nearly one in three individuals in the UK admits to sharing confidential personal details with AI chatbots, such as OpenAI’s ChatGPT, according to new research by cybersecurity firm NymVPN. The study reveals that 30% of Britons have disclosed sensitive data—including banking information and health records—to AI tools, potentially endangering their own privacy and that of others.

Despite 48% of respondents expressing concerns over the safety of AI chatbots, many continue to reveal private details. This habit extends to professional settings, where employees are reportedly sharing internal company and customer information with these platforms.

The findings come amid a wave of high-profile cyberattacks, including the recent breach at Marks & Spencer, which underscores how easily confidential data can be compromised. NymVPN reports that 26% of survey participants have entered financial details related to salaries, mortgages, and investments, while 18% have exposed credit card or bank account numbers. Additionally, 24% acknowledged sharing customer data—such as names and email addresses—and 16% uploaded company financial records and contracts.

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.

Organizations such as M&S, Co-op, and Adidas have already made headlines for data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” Halpin added.

With nearly a quarter of people admitting to sharing customer data with AI tools, experts emphasize the urgent need for businesses to establish strict policies governing AI usage at work.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” Halpin warned.

Completely avoiding AI chatbots might be the safest option, but it’s not always realistic. Users are advised to refrain from entering sensitive information, to adjust privacy settings such as disabling chat history, and to opt out of model training where the provider allows it.

Using a VPN can provide an additional layer of online privacy by encrypting internet traffic and masking IP addresses when accessing AI chatbots like ChatGPT. However, even with a VPN, risks remain if individuals continue to input confidential data.

Episource Healthcare Data Breach Exposes Personal Data of 5.4 Million Americans

 

In early 2025, a cyberattack targeting healthcare technology provider Episource compromised the personal and medical data of over 5.4 million individuals in the United States. Though not widely known to the public, Episource plays a critical role in the healthcare ecosystem by offering medical coding, risk adjustment, and data analytics services to major providers. This makes it a lucrative target for hackers seeking access to vast troves of sensitive information. 

The breach took place between January 27 and February 6, 2025. During this time, attackers infiltrated the company’s systems and extracted confidential data, including names, addresses, contact details, Social Security numbers, insurance information, Medicaid IDs, and medical records. Fortunately, no banking or payment card information was exposed in the incident. The U.S. Department of Health and Human Services reported that the breach affected over 5.4 million people.

What makes this breach particularly concerning is that many of those affected likely had no direct relationship with Episource, as the company operates in the background of the healthcare system. Its partnerships with insurers and providers mean it routinely processes massive volumes of personal data, leaving millions exposed when its security infrastructure fails. 

Episource responded to the breach by notifying law enforcement, launching an internal investigation, and hiring third-party cybersecurity experts. In April, the company began sending out physical letters to affected individuals explaining what data may have been exposed and offering free credit monitoring and identity restoration services through IDX. These notifications are being issued by traditional mail rather than email, in keeping with standard procedures for health-related data breaches. 

The long-term implications of this incident go beyond individual identity theft. The nature of the data stolen — particularly medical and insurance records combined with Social Security numbers — makes those affected highly vulnerable to fraud and phishing schemes. With full profiles of patients in hand, cybercriminals can carry out advanced impersonation attacks, file false insurance claims, or apply for loans in someone else’s name. 

This breach underscores the growing need for stronger cybersecurity across the healthcare industry, especially among third-party service providers. While Episource is offering identity protection to affected users, individuals must remain cautious by monitoring accounts, being wary of unknown communications, and considering a credit freeze as a precaution. As attacks on healthcare entities become more frequent, robust data security is no longer optional — it’s essential for maintaining public trust and protecting sensitive personal information.

Balancing Accountability and Privacy in the Age of Work Tracking Software

 

As businesses adopt employee monitoring tools to improve output and align team goals, they must also consider the implications for privacy. The success of these systems doesn’t rest solely on data collection, but on how transparently and respectfully they are implemented. When done right, work tracking software can enhance productivity while preserving employee dignity and fostering a culture of trust. 

One of the strongest arguments for using tracking software lies in the visibility it offers. In hybrid and remote work settings, where face-to-face supervision is limited, these tools offer leaders critical insights into workflows, project progress, and resource allocation. They enable more informed decisions and help identify process inefficiencies that could otherwise remain hidden. At the same time, they give employees the opportunity to highlight their own efforts, especially in collaborative environments where individual contributions can easily go unnoticed. 

For workers, having access to objective performance data ensures that their time and effort are acknowledged. Instead of constant managerial oversight, employees can benefit from automated insights that help them manage their time more effectively. This reduces the need for frequent check-ins and allows greater autonomy in daily schedules, ultimately leading to better focus and outcomes. 

However, the ethical use of these tools requires more than functionality—it demands transparency. Companies must clearly communicate what is being monitored, why it’s necessary, and how the collected data will be used. Monitoring practices should be limited to work-related metrics like app usage or project activity and should avoid invasive methods such as covert screen recording or keystroke logging. When employees are informed and involved from the start, they are more likely to accept the tools as supportive rather than punitive. 

Modern tracking platforms often go beyond timekeeping. Many offer dashboards that enable employees to view their own productivity patterns, identify distractions, and make self-directed improvements. This shift from oversight to insight empowers workers and contributes to their personal and professional development. At the organizational level, this data can guide strategy, uncover training needs, and drive better resource distribution—without compromising individual privacy. 

Ultimately, integrating work tracking tools responsibly is less about trade-offs and more about fostering mutual respect. The most successful implementations are those that treat transparency as a priority, not an afterthought. By framing these tools as resources for growth rather than surveillance, organizations can reinforce trust while improving overall performance. 

Used ethically and with clear communication, work tracking software has the potential to unify rather than divide. It supports both the operational needs of businesses and the autonomy of employees, proving that accountability and privacy can, in fact, coexist.

Germany’s Warmwind May Be the First True AI Operating System — But It’s Not What You Expect

 



Artificial intelligence is starting to change how we interact with computers. Since advanced chatbots like ChatGPT gained popularity, the idea of AI systems that can understand natural language and perform tasks for us has been gaining ground. Many have imagined a future where we simply tell our computer what to do, and it just gets done, like the assistants we’ve seen in science fiction movies.

Tech giants like OpenAI, Google, and Apple have already taken early steps. AI tools can now understand voice commands, control some apps, and even help automate tasks. But while these efforts are still in progress, the first real AI operating system appears to be coming from a small German company called Jena, not from Silicon Valley.

Their product is called Warmwind, and it’s currently in beta testing. Though it’s not widely available yet, over 12,000 people have already joined the waitlist to try it.


What exactly is Warmwind?

Warmwind is an AI-powered system designed to work like a “digital employee.” Instead of being a voice assistant or chatbot, Warmwind watches how users perform digital tasks like filling out forms, creating reports, or managing software, and then learns to do those tasks itself. Once trained, it can carry out the same work over and over again without any help.

Unlike traditional operating systems, Warmwind doesn’t run on your computer. It operates remotely through cloud servers based in Germany, following the strict privacy rules under the EU’s GDPR. You access it through your browser, but the system keeps running even if you close the window.

The AI behaves much like a person using a computer. It clicks buttons, types, navigates through screens, and reads information — all without needing special APIs or coding integrations. In short, it automates your digital tasks the same way a human would, but much faster and without tiring.

Warmwind is mainly aimed at businesses that want to reduce time spent on repetitive computer work. While it’s not the futuristic AI companion from the movies, it’s a step in that direction, making software more hands-free and automated.

Technically, Warmwind runs on a customized version of Linux built specifically for automation. It uses remote streaming technology to show you the user interface while the AI works in the background.

Jena, the company behind Warmwind, says calling it an “AI operating system” is symbolic. The name helps people grasp the concept quickly: it is an operating system, not for people, but for digital AI workers.

While it’s still early days for AI OS platforms, Warmwind might be showing us what the future of work could look like, where computers no longer wait for instructions but get things done on their own.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability

 

Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or DeepSeek, especially for those concerned about data privacy, internet dependency, and speed. Though cloud services promise protections in their subscription terms, how your data is actually handled remains uncertain. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties.

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer devices such as Intel Core Ultra, Qualcomm Snapdragon X Elite, and Apple’s M-series chips (M1–M4) come equipped with NPUs built for this purpose. With one of these devices, you can run open-source AI models like DeepSeek‑R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like OpenWeb UI, you can replicate the experience of cloud chatbots entirely offline.  
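To make this concrete, here is a minimal sketch of talking to a locally running model through Ollama's default REST endpoint on localhost. It assumes Ollama is installed and a model has already been pulled (for example with a command such as "ollama pull llama3.2"); the model name is only an example.

```python
# Minimal local-chat sketch: send a prompt to an Ollama server on this machine,
# so nothing is transmitted over the internet. Assumes Ollama is installed,
# a model has been pulled (for example "llama3.2"), and the default local
# endpoint is unchanged.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local API
MODEL = "llama3.2"                              # any model you have pulled

def ask_local_model(prompt: str) -> str:
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON reply instead of a token stream
    }
    request = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("In one sentence, why does on-device AI help privacy?"))
```

Because the request never leaves localhost, the prompt and the response stay on the machine; swapping the model name is all that is needed to try a different downloaded model.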

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. However, be aware that model files can be quite large (often 20 GB or more), and without NPU support, performance may be sluggish and battery life will suffer.  

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, OpenWeb UI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.

Can AI Be Trusted With Sensitive Business Data?

 



As artificial intelligence becomes more common in businesses, from retail to finance to technology, it is helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees only see what they’re allowed to, especially when AI mixes information from many different places?

Take this example: a retail company’s AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful, but what if that forecast is shown to someone who isn’t supposed to access sensitive customer details? That’s where access control becomes tricky.


Why Traditional Access Rules Don’t Work for AI

In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources, such as internal files, external APIs, and sensor feeds, and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.


Why It Matters

Security Concerns: If sensitive data ends up in the wrong hands, even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.

Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.

Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in both the system and the business.


What’s Making This So Difficult?

1. AI systems often blend data so deeply that it’s hard to tell what came from where.

2. Access rules are usually fixed, but AI relies on fast-changing data.

3. Companies have many users with different roles and permissions, making enforcement complicated.

4. Permissions are often too broad; for example, someone allowed to "view reports" might accidentally access sensitive content.


How Can Businesses Fix This?

• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.

• Flexible Access Rules: Adjust permissions based on user roles and context.

• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources (a minimal sketch combining this with origin tracking follows this list).

• Separate Models: Train different AI models for different user groups, each with its own safe data.

• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.
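As a rough illustration of the first and third points, the sketch below tags each record with an origin label and masks restricted values before they reach a user who lacks clearance. It is a conceptual example only; the roles, labels, and data are invented, not taken from any particular product.

```python
# Conceptual sketch: tag every data record with its origin, then filter what an
# AI-generated answer may reveal based on the requesting user's clearance.
from dataclasses import dataclass

@dataclass
class Record:
    value: str
    origin: str  # "public" or "restricted"

ROLE_CLEARANCE = {
    "analyst": {"public"},                  # may only see public-derived output
    "compliance": {"public", "restricted"}  # may see everything
}

def filter_for_role(records: list[Record], role: str) -> list[str]:
    """Return only the values whose origin the role is cleared to see;
    mask the rest so restricted inputs never leak into the response."""
    allowed = ROLE_CLEARANCE.get(role, set())
    return [r.value if r.origin in allowed else "[REDACTED]" for r in records]

if __name__ == "__main__":
    forecast_inputs = [
        Record("Q3 market trend: +4%", "public"),
        Record("Customer 1042 average basket: $310", "restricted"),
    ]
    print(filter_for_role(forecast_inputs, "analyst"))
    # ['Q3 market trend: +4%', '[REDACTED]']
    print(filter_for_role(forecast_inputs, "compliance"))
    # ['Q3 market trend: +4%', 'Customer 1042 average basket: $310']
```

In a real deployment, the same check would sit between the model's raw output and the user-facing response, driven by the organization's actual role definitions.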


As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.

Horizon Healthcare RCM Reports Ransomware Breach Impacting Patient Data

 

Horizon Healthcare RCM has confirmed it was the target of a ransomware attack involving the theft of sensitive health information, making it the latest revenue cycle management (RCM) vendor to report such a breach. Based on the company’s breach disclosure, it appears a ransom may have been paid to prevent the public release of stolen data. 

In a report filed with Maine’s Attorney General on June 27, Horizon disclosed that six state residents were impacted but did not provide a total number of affected individuals. As of Monday, the U.S. Department of Health and Human Services’ Office for Civil Rights had not yet listed the incident on its breach portal, which logs healthcare data breaches affecting 500 or more people.  

However, the scope of the incident may be broader, since Horizon provides billing and revenue cycle services to many healthcare organizations. It remains unclear whether Horizon is notifying patients directly on behalf of those clients or whether each client will report the breach independently.

In a public notice, Horizon explained that the breach was first detected on December 27, 2024, when ransomware locked access to some files. While systems were later restored, the company determined that certain data had also been copied without permission. 

Horizon noted that it “arranged for the responsible party to delete the copied data,” indicating a likely ransom negotiation. Notices are being sent to affected individuals where possible. The compromised data varies, but most records included a Horizon internal number, patient ID, or insurance claims data. 

In some cases, more sensitive details were exposed, such as Social Security numbers, driver’s license or passport numbers, payment card details, or financial account information. Despite the breach, Horizon stated that there have been no confirmed cases of identity theft linked to the incident. 

The matter has been reported to federal law enforcement. Multiple law firms have since announced investigations into the breach, raising the possibility of class-action litigation. This incident follows several high-profile breaches involving other RCM firms in recent months. 

In May, Nebraska-based ALN Medical Management updated a previously filed breach report, raising the number of affected individuals from 501 to over 1.3 million. Similarly, Gryphon Healthcare disclosed in October 2024 that nearly 400,000 people were impacted by a separate attack. 

Most recently, California-based Episource LLC revealed in June that a ransomware incident in February exposed the health information of roughly 5.42 million individuals. That event now ranks as the second-largest healthcare breach in the U.S. so far in 2025. Experts say that RCM vendors continue to be lucrative targets for cybercriminals due to their access to vast stores of healthcare data and their central role in financial operations. 

Bob Maley, Chief Security Officer at Black Kite, noted that targeting these firms offers hackers outsized rewards. “Hitting one RCM provider can affect dozens of healthcare facilities, exposing massive amounts of data and disrupting financial workflows all at once,” he said.

Maley warned that many of these firms are still operating under outdated cybersecurity models. “They’re stuck in a compliance mindset, treating risk in vague terms. But boards want to know the real-world financial impact,” he said.

He also emphasized the importance of supply chain transparency. “These vendors play a crucial role for hospitals, but how well do they know their own vendors? Relying on outdated assessments leaves them blind to emerging threats.” 

Maley concluded that until RCM providers prioritize cybersecurity as a business imperative—not just an IT issue—the industry will remain vulnerable to repeating breaches.

Personal AI Agents Could Become Digital Advocates in an AI-Dominated World

 

As generative AI agents proliferate, a new concept is gaining traction: AI entities that act as loyal digital advocates, protecting individuals from overwhelming technological complexity, misinformation, and data exploitation. Experts suggest these personal AI companions could function similarly to service animals—trained not just to assist, but to guard user interests in an AI-saturated world. From scam detection to helping navigate automated marketing and opaque algorithms, these agents would act as user-first shields. 

At a recent Imagination in Action panel, Consumer Reports’ Ginny Fahs explained, “As companies embed AI deeper into commerce, it becomes harder for consumers to identify fair offers or make informed decisions. An AI that prioritizes users’ interests can build trust and help transition toward a more transparent digital economy.” The idea is rooted in giving users agency and control in a system where most AI is built to serve businesses. Panelists—including experts like Dazza Greenwood, Amir Sarhangi, and Tobin South—discussed how loyal, trustworthy AI advocates could reshape personal data rights, online trust, and legal accountability. 

Greenwood drew parallels to early internet-era reforms such as e-signatures and automated contracts, suggesting a similar legal evolution is needed now to govern AI agents. South added that AI agents must be “loyal by design,” ensuring they act within legal frameworks and always prioritize the user. Sarhangi introduced the concept of “Know Your Agent” (KYA), which promotes transparency by tracking the digital footprint of an AI. 

With unique agent wallets and activity histories, bad actors could be identified and held accountable. Fahs described a tool called “Permission Slip,” which automates user requests like data deletion. This form of AI advocacy predates current generative models but shows how user-authorized agents could manage privacy at scale. Agents could also learn from collective behavior. For instance, an AI noting a negative review of a product could share that experience with other agents, building an automated form of word-of-mouth. 

This concept, said panel moderator Sandy Pentland, mirrors how Consumer Reports aggregates user feedback to identify reliable products. South emphasized that cryptographic tools could ensure safe data-sharing without blindly trusting tech giants. He also referenced NANDA, a decentralized protocol from MIT that aims to enable trustworthy AI infrastructure. Still, implementing AI agents raises usability questions. “We want agents to understand nuanced permissions without constantly asking users to approve every action,” Fahs said. 

Getting this right will be crucial to user adoption. Pentland noted that current AI models struggle to align with individual preferences. “An effective agent must represent you—not a demographic group, but your unique values,” he said. Greenwood believes that’s now possible: “We finally have the tools to build AI agents with fiduciary responsibilities.” In closing, South stressed that the real bottleneck isn’t AI capability but structuring and contextualizing information properly. “If you want AI to truly act on your behalf, we must design systems that help it understand you.” 

As AI becomes deeply embedded in daily life, building personalized, privacy-conscious agents may be the key to ensuring technology serves people—not the other way around.

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use client data to train their chatbots, drawing on both private and public sources. Some services take a more flexible and less intrusive approach to gathering customer data; others do not. A recent analysis from data removal firm Incogni weighs how well popular generative AI services protect your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy policies using 11 distinct factors. The criteria addressed the following questions:

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information is gathered by the AI apps? 

The research involved Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI performed well on certain questions but not so well on others.

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

However, Le Chat emerged as the best privacy-friendly AI service overall. It did well in the transparency category, despite losing a few points. Additionally, it only collects a small amount of data and achieves excellent scores for additional privacy concerns unique to AI. 

Second place went to ChatGPT. Researchers at Incogni were a little worried about how user data interacts with the service and how OpenAI trains its models. However, ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives you explicit instructions on how to restrict how your data is used. Grok took third place, with Claude and Pi coming in fourth and fifth, respectively. Each performed reasonably well in terms of protecting user privacy overall, though there were some issues in certain areas.

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

You can prevent the models from being trained on your prompts with some providers. This is true for Grok, Mistral AI, Copilot, and ChatGPT. However, based on their privacy policies and other resources, it appears that other services, including Gemini, DeepSeek, Pi AI, and Meta AI, do not allow this kind of data collection to be stopped. For its part, Anthropic stated that it never gathers user input for model training.

Ultimately, a clear and understandable privacy policy goes a long way toward helping you determine what information is being gathered and how to opt out.

WhatsApp Ads Delayed in EU as Meta Faces Privacy Concerns

 

Meta recently introduced in-app advertisements within WhatsApp for users across the globe, marking the first time ads have appeared on the messaging platform. However, this change won’t affect users in the European Union just yet. According to the Irish Data Protection Commission (DPC), WhatsApp has informed them that ads will not be launched in the EU until sometime in 2026. 

Previously, Meta had stated that the feature would gradually roll out over several months but did not provide a specific timeline for European users. The newly introduced ads appear within the “Updates” tab on WhatsApp, specifically inside Status posts and the Channels section. Meta has stated that the ad system is designed with privacy in mind, using minimal personal data such as location, language settings, and engagement with content. If a user has linked their WhatsApp with the Meta Accounts Center, their ad preferences across Instagram and Facebook will also inform what ads they see. 

Despite these assurances, the integration of data across platforms has raised red flags among privacy advocates and European regulators. As a result, the DPC plans to review the advertising model thoroughly, working in coordination with other EU privacy authorities before approving a regional release. Des Hogan, Ireland’s Data Protection Commissioner, confirmed that Meta has officially postponed the EU launch and that discussions with the company will continue to assess the new ad approach. 

Dale Sunderland, another commissioner at the DPC, emphasized that the process remains in its early stages and it’s too soon to identify any potential regulatory violations. The commission intends to follow its usual review protocol, which applies to all new features introduced by Meta. This strategic move by Meta comes while the company is involved in a high-profile antitrust case in the United States. The lawsuit seeks to challenge Meta’s ownership of WhatsApp and Instagram and could potentially lead to a forced breakup of the company’s assets. 

Meta’s decision to push forward with deeper cross-platform ad integration may indicate confidence in its legal position. The tech giant continues to argue that its advertising tools are essential for small business growth and that any restrictions on its ad operations could negatively impact entrepreneurs who rely on Meta’s platforms for customer outreach. However, critics claim this level of integration is precisely why Meta should face stricter regulatory oversight—or even be broken up. 

As the U.S. court prepares to issue a ruling, the EU delay illustrates how Meta is navigating regulatory pressures differently across markets. After initial reporting, WhatsApp clarified that the 2025 rollout in the EU was never confirmed, and the current plan reflects ongoing conversations with European regulators.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

PocketPal AI Brings Offline AI Chatbot Experience to Smartphones With Full Data Privacy

 

In a digital world where most AI chatbots rely on cloud computing and constant internet connectivity, PocketPal AI takes a different approach by offering an entirely offline, on-device chatbot experience. This free app brings AI processing power directly onto your smartphone, eliminating the need to send data back and forth across the internet. Conventional AI chatbots typically transmit your interactions to distant servers, where the data is processed before a response is returned. That means even sensitive or routine conversations can be stored remotely, raising concerns about privacy, data usage, and the potential for misuse.

PocketPal AI flips this model by handling all computation on your device, ensuring your data never leaves your phone unless you explicitly choose to save or share it. This local processing model is especially useful in areas with unreliable internet or no access at all. Whether you’re traveling in rural regions, riding the metro, or flying, PocketPal AI works seamlessly without needing a connection. 

Additionally, using an AI offline helps reduce mobile data consumption and improves speed, since there’s no delay waiting for server responses. The app is available on both iOS and Android and offers users the ability to interact with compact but capable language models. While you do need an internet connection during the initial setup to download a language model, once that’s done, PocketPal AI functions completely offline. To begin, users select a model from the app’s library or upload one from their device or from the Hugging Face community. 

Although the app lists models without detailed descriptions, users can consult external resources to understand which model is best for their needs—whether it’s from Meta, Microsoft, or another developer. After downloading a model—most of which are several gigabytes in size—users simply tap “Load” to activate the model, enabling conversations with their new offline assistant. 

For those more technically inclined, PocketPal AI includes advanced settings for switching between models, adjusting inference behavior, and testing performance. While these features offer great flexibility, they’re likely best suited for power users. On high-end devices like the Pixel 9 Pro Fold, PocketPal AI runs smoothly and delivers fast responses. 

However, older or budget devices may face slower load times or stuttering performance due to limited memory and processing power. Because offline models must be optimized for device constraints, they tend to be smaller in size and capabilities compared to cloud-based systems. As a result, while PocketPal AI handles common queries, light content generation, and basic conversations well, it may not match the contextual depth and complexity of large-scale models hosted in the cloud. 

Even with these trade-offs, PocketPal AI offers a powerful solution for users seeking AI assistance without sacrificing privacy or depending on an internet connection. It delivers a rare combination of utility, portability, and data control in today’s cloud-dominated AI ecosystem. 

As privacy awareness and concerns about centralized data storage continue to grow, PocketPal AI represents a compelling alternative—one that puts users back in control of their digital interactions, no matter where they are.

Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience

 

At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

Beware iPhone Users: Indian Government Issues Urgent Advisory Over Data Theft Risk

 

The Indian government has issued an urgent security warning to iPhone and iPad users, citing major flaws in Apple's iOS and iPadOS software. If not addressed, these vulnerabilities could allow cybercriminals to access sensitive user data or make devices inoperable. The advisory was issued by the Indian Computer Emergency Response Team (CERT-In), which is part of the Ministry of Electronics and Information Technology, and urged users to act immediately.

Apple devices running older versions of iOS (prior to 18.3) and iPadOS (prior to 17.7.3 or 18.3) are particularly vulnerable to the security flaws. Popular models that fall within this category include the iPhone XS and newer, the iPad Pro (2nd generation and later), iPad (6th generation and later), iPad Air (3rd generation and later), and iPad mini (5th generation and later).

One of the major flaws involves the Darwin notification system, a low-level mechanism Apple's operating systems use to broadcast system events between processes. The vulnerability enables unauthorised apps to send system-level notifications without requiring additional permissions. If exploited, the device could freeze or crash, necessitating user intervention to restore functionality.

These flaws present serious threats. Hackers could gain access to sensitive information such as personal details and financial data. In other cases, they could circumvent the device's built-in security protections and run malicious code that jeopardises the system's integrity. In the worst-case scenario, an attacker could crash the device, rendering it completely unusable. CERT-In has also confirmed that some of these flaws are being actively exploited, emphasising the need for users to act quickly. 

Apple has responded by releasing security updates to fix these vulnerabilities. Affected users are strongly advised to update their devices to the latest version of iOS or iPadOS as soon as possible; this update is critical to defending against potential threats. Additionally, users are cautioned against downloading suspicious or unverified apps, as they could act as entry points for malware. It is also important to monitor any unusual device behaviour, as it may indicate a security compromise. 

As Apple's footprint in India grows, it is more critical than ever that users remain informed and cautious. Regular software updates and sensible, cautious usage habits are essential for guarding against the growing threat of cyberattacks. By taking these proactive measures, iPhone and iPad users can improve the security of their devices and sensitive data.

Here's Why Websites Are Offering "Ad-Lite" Premium Subscriptions

 

Some websites let you remove adverts entirely after subscribing, while others now offer "ad-lite" memberships. However, these ad-supported subscription tiers rarely give you the best value. 

Not removing all ads

Ads are a significant source of income for many websites, even though they can be annoying. Additionally, a lot of websites now detect ad-blockers, so older blocking methods may no longer be as effective.

Because adverts aren't going away, full ad-free memberships are a decent compromise for websites: the site keeps earning the money it needs to operate while users get an ad-free experience. In this case, everybody wins. 

However, ad-lite subscriptions are not always the most cost-effective option. Rather than removing adverts entirely, they typically only stop you from seeing personalised ads. While others may disagree, I can't see how this would encourage me to subscribe; I'd rather pay an extra few dollars per month to remove ads completely. 

In addition to text-based websites, YouTube has tested a Premium Lite tier. Although not all videos are ad-free, the majority are. Subscribing makes no sense for me if the videos that still carry advertisements are on the topics I'm interested in. 

Using personal data 

Many websites will track your behaviour because many advertisements are tailored to your preferences. Advertisers can then use this information to recommend items and services that they believe you would be interested in.

Given that many people have become more concerned about their privacy in recent years, it's reasonable that some may wish to pay to prevent their data from being used. While paying can sometimes achieve that, certain websites may continue to use your information even after you subscribe to an ad-lite tier. 

Websites continue to require user information in order to get feedback and improve their services. As a result, your data may still be used in certain scenarios. The key distinction is that it will rarely be used for advertising; while this may be sufficient for some, others may find it more aggravating. It is difficult to avoid being tracked online under any circumstances. You can still be tracked while browsing in incognito or private mode.

Use ad-free version

Many websites with ad-lite tiers also provide totally ad-free versions. When you subscribe to them, you will not see any advertisements, personalised or otherwise. Furthermore, you frequently get access to exclusive and/or unlimited content, allowing you to fully support your preferred publications. Rather than focusing only on the price, evaluate how much more value you'll gain from an ad-free tier compared with ad-lite. 

An ad-lite membership ends up being the worst of both worlds. You'll still see adverts, just less personalised ones, and you may end up watching ads on content you enjoy while paying for ad-free access to content you don't care about. It's preferable to pay for the complete ad-free version.

WhatsApp Reveals "Private Processing" Feature for Cloud Based AI Features


WhatsApp claims even it cannot see the data being processed

WhatsApp has introduced ‘Private Processing,’ a new technology that allows users to access advanced AI features by offloading tasks to privacy-preserving cloud servers, without exposing their chats to Meta. Meta claims even it cannot see the messages while processing them. The system relies on encrypted cloud infrastructure and hardware-based isolation so that requests remain invisible to anyone, including Meta, while the data is processed. 

About private processing

For those who decide to use Private Processing, the system performs an anonymous verification step via the user’s WhatsApp client to confirm that the request comes from a legitimate user. 

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

Private Processing employs Trusted Execution Environments (TEEs): secure, hardware-isolated virtual machines running on cloud infrastructure that keep AI requests hidden. A conceptual sketch of the underlying encryption pattern follows the list below. 

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption
  • Restricts storage or logging of messages after processing
  • Publishes logs and binary images for external verification and audits
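
To illustrate the general idea, and only the general idea, here is a minimal hybrid-encryption sketch in which a client encrypts a request so that only the holder of an enclave's private key can read it. This is not Meta's actual protocol; the key names and the sample payload are invented, and the sketch uses the third-party Python "cryptography" package.

```python
# Conceptual sketch: encrypt a request so only the enclave (TEE) holding the
# matching private key can decrypt it. Relays and server operators in between
# see only ciphertext. NOT WhatsApp's real protocol; names/payload are made up.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(shared_secret: bytes) -> bytes:
    # Turn the raw X25519 shared secret into a 256-bit AEAD key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"tee-request").derive(shared_secret)

def encrypt_for_enclave(enclave_public: x25519.X25519PublicKey, request: bytes):
    ephemeral = x25519.X25519PrivateKey.generate()      # fresh key per request
    key = derive_key(ephemeral.exchange(enclave_public))
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, request, None)
    # Only the ephemeral public key, nonce, and ciphertext leave the device.
    return ephemeral.public_key(), nonce, ciphertext

if __name__ == "__main__":
    enclave_private = x25519.X25519PrivateKey.generate()  # held inside the TEE
    eph_pub, nonce, blob = encrypt_for_enclave(
        enclave_private.public_key(), b"summarize this chat thread")
    # Inside the enclave: recompute the shared secret and decrypt.
    key = derive_key(enclave_private.exchange(eph_pub))
    print(AESGCM(key).decrypt(nonce, blob, None))
```

In the real system, the enclave's key material would be tied to a remotely attested TEE image, and the request would travel over an OHTTP-style relay so the server also cannot tell who sent it; the sketch only shows the encryption half of that picture.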

WhatsApp builds AI amid wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI in messaging. WhatsApp now joins companies like Apple that introduced confidential AI computing models in the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.

The approach is similar to Apple’s Private Cloud Compute in its emphasis on public transparency and stateless processing. Currently, however, WhatsApp is using these techniques only for select features. Apple, by contrast, has declared plans to implement its model across all of its AI tools, whereas WhatsApp has made no such commitment yet. 

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”

ProtectEU and VPN Privacy: What the EU Encryption Plan Means for Online Security

 

Texting through SMS is pretty much a thing of the past. Most people today rely on apps like WhatsApp and Signal to share messages, make encrypted calls, or send photos—all under the assumption that our conversations are private. But that privacy could soon be at risk in the EU.

On April 1, 2025, the European Commission introduced a new plan called ProtectEU. Its goal is to create a roadmap for “lawful and effective access to data for law enforcement,” particularly targeting encrypted platforms. While messaging apps are the immediate focus, VPN services might be next. VPNs rely on end-to-end encryption and strict no-log policies to keep users anonymous. However, if ProtectEU leads to mandatory encryption backdoors or expanded data retention rules, that could force VPN providers to change how they operate—or leave the EU altogether. 

Proton VPN’s Head of Public Policy, Jurgita Miseviciute, warns that weakening encryption won’t solve security issues. Instead, she believes it would put users at greater risk, allowing bad actors to exploit the same access points created for law enforcement. Proton is monitoring the plan closely, hoping the EU will consider solutions that protect encryption. Surfshark takes a more optimistic view. Legal Head Gytis Malinauskas says the strategy still lacks concrete policy direction and sees the emphasis on cybersecurity as a potential boost for privacy tools like VPNs. Mullvad VPN isn’t convinced. 

Having fought against earlier EU proposals to scan private chats, Mullvad criticized ProtectEU as a rebranded version of old policies, expressing doubt it will gain wide support. One key concern is data retention. If the EU decides to require VPNs to log user activity, it could fundamentally conflict with their privacy-first design. Denis Vyazovoy of AdGuard VPN notes that such laws could make no-log VPNs unfeasible, prompting providers to exit the EU market—much like what happened in India in 2022. NordVPN adds that the more data retained, the more risk users face from breaches or misuse. 

Even though VPNs aren’t explicitly targeted yet, an EU report has listed them as a challenge to investigations—raising concerns about future regulations. Still, Surfshark sees the current debate as a chance to highlight the legitimate role VPNs play in protecting everyday users. While the future remains uncertain, one thing is clear: the tension between privacy and security is only heating up.

Best Encrypted Messaging Apps: Signal vs Telegram vs WhatsApp Privacy Guide

 

Encrypted messaging apps have become essential tools in the age of cyber threats and surveillance. With rising concerns over data privacy, especially after recent high-profile incidents, users are turning to platforms that offer more secure communication. Among the top contenders are Signal, Telegram, and WhatsApp—each with its own approach to privacy, encryption, and data handling. 

Signal is widely regarded as the gold standard when it comes to messaging privacy. Backed by a nonprofit foundation and funded through grants and donations, Signal doesn’t rely on user data for profit. It collects minimal information—just your phone number—and offers strong on-device privacy controls, like disappearing messages and call relays to mask IP addresses. Being open-source, Signal allows independent audits of its code, ensuring transparency. Even when subpoenaed, the app could only provide limited data like account creation date and last connection, making it a favorite among journalists, whistleblowers, and privacy advocates.  

Telegram offers a broader range of features but falls short on privacy. While it supports end-to-end encryption, this is limited only to its “secret chats,” and not enabled by default in regular messages or public channels. Telegram also stores metadata, such as IP addresses and contact info, and recently updated its privacy policy to allow data sharing with authorities under legal requests. Despite this, it remains popular for public content sharing and large group chats, thanks to its forum-like structure and optional paid features. 

WhatsApp, with over 2 billion users, is the most widely used encrypted messaging app. It employs the same encryption protocol as Signal, ensuring end-to-end protection for chats and calls. However, as a Meta-owned platform, it collects significant user data—including device information, usage logs, and location data. Even people not using WhatsApp can have their data collected via synced contacts. While messages remain encrypted, the amount of metadata stored makes it less privacy-friendly compared to Signal. 

All three apps offer some level of encrypted messaging, but Signal stands out for its minimal data collection, open-source transparency, and commitment to privacy. Telegram provides a flexible chat experience with weaker privacy controls, while WhatsApp delivers strong encryption within a data-heavy ecosystem. Choosing the best encrypted messaging app depends on what you prioritize more: security, features, or convenience.

Apple and Google App Stores Host VPN Apps Linked to China, Face Outrage


Google (GOOGL) and Apple (AAPL) are under harsh scrutiny after a recent report disclosed that their app stores host VPN applications associated with Qihoo 360, a Chinese cybersecurity firm blacklisted by the U.S. government. The Financial Times reports that five VPNs still available to U.S. users, such as VPN Proxy Master and Turbo VPN, are linked to Qihoo, which was sanctioned in 2020 over alleged military ties. 

Illusion of privacy: VPNs collecting data 

In 2025 alone, three of these VPN apps have had over a million downloads on Google Play and Apple’s App Store, suggesting these aren’t small-time apps, Sensor Tower reports. They are advertised as “private browsing” tools, but the VPNs give their operators complete visibility into users’ online activity. This is alarming because China’s national security laws mandate that companies hand over user data if the government demands it. 

Concerns around ownership structures

The intricate web of ownership structures raises important questions. The apps are run by Singapore-based Innovative Connecting, owned by Lemon Seed, a Cayman Islands firm. Qihoo acquired Lemon Seed for $69.9 million in 2020 and claimed to have sold the business months later, but the FT reports that the China-based team building the applications remained under Qihoo’s umbrella for years. According to the FT, a developer said, “You could say that we’re part of them, and you could say we’re not. It’s complicated.”

Amid outrage, Google and Apple respond 

Google said it strives to follow sanctions and removes violators when found. Apple removed two apps, Snap VPN and Thunder VPN, after the FT contacted the company, and says it enforces strict rules on VPN data-sharing.

Privacy scares can damage stock valuations

What Google and Apple face is more than public outrage. Investors prioritise data privacy, and regulatory threats have increased amid growing concerns about U.S. tech firms’ links to China. If the U.S. government gets involved, the result could be stricter rules, fines, and even more app removals. If that happens, shareholders won’t be happy. 

According to FT, “Innovative Connecting said the content of the article was not accurate and declined to comment further. Guangzhou Lianchuang declined to comment. Qihoo and Chen Ningyi did not respond to requests for comment.”