
Horizon Healthcare RCM Reports Ransomware Breach Impacting Patient Data

 

Horizon Healthcare RCM has confirmed it was the target of a ransomware attack involving the theft of sensitive health information, making it the latest revenue cycle management (RCM) vendor to report such a breach. Based on the company’s breach disclosure, it appears a ransom may have been paid to prevent the public release of stolen data. 

In a report filed with Maine’s Attorney General on June 27, Horizon disclosed that six state residents were impacted but did not provide a total number of affected individuals. As of Monday, the U.S. Department of Health and Human Services’ Office for Civil Rights had not yet listed the incident on its breach portal, which logs healthcare data breaches affecting 500 or more people.  

However, the scope of the incident may be broader, as Horizon provides revenue cycle services to numerous healthcare organizations. It remains unclear whether Horizon is notifying patients directly on behalf of those clients or whether each client will report the breach independently. 

In a public notice, Horizon explained that the breach was first detected on December 27, 2024, when ransomware locked access to some files. While systems were later restored, the company determined that certain data had also been copied without permission. 

Horizon noted that it “arranged for the responsible party to delete the copied data,” indicating a likely ransom negotiation. Notices are being sent to affected individuals where possible. The compromised data varies, but most records included a Horizon internal number, patient ID, or insurance claims data. 

In some cases, more sensitive details were exposed, such as Social Security numbers, driver’s license or passport numbers, payment card details, or financial account information. Despite the breach, Horizon stated that there have been no confirmed cases of identity theft linked to the incident. 

The matter has been reported to federal law enforcement. Multiple law firms have since announced investigations into the breach, raising the possibility of class-action litigation. This incident follows several high-profile breaches involving other RCM firms in recent months. 

In May, Nebraska-based ALN Medical Management updated a previously filed breach report, raising the number of affected individuals from 501 to over 1.3 million. Similarly, Gryphon Healthcare disclosed in October 2024 that nearly 400,000 people were impacted by a separate attack. 

Most recently, California-based Episource LLC revealed in June that a ransomware incident in February exposed the health information of roughly 5.42 million individuals. That event now ranks as the second-largest healthcare breach in the U.S. so far in 2025. Experts say that RCM vendors continue to be lucrative targets for cybercriminals due to their access to vast stores of healthcare data and their central role in financial operations. 

Bob Maley, Chief Security Officer at Black Kite, noted that targeting these firms offers hackers outsized rewards. “Hitting one RCM provider can affect dozens of healthcare facilities, exposing massive amounts of data and disrupting financial workflows all at once,” he said.  
Maley warned that many of these firms are still operating under outdated cybersecurity models. “They’re stuck in a compliance mindset, treating risk in vague terms. But boards want to know the real-world financial impact,” he said. 

He also emphasized the importance of supply chain transparency. “These vendors play a crucial role for hospitals, but how well do they know their own vendors? Relying on outdated assessments leaves them blind to emerging threats.” 

Maley concluded that until RCM providers prioritize cybersecurity as a business imperative—not just an IT issue—the industry will remain vulnerable to repeating breaches.

Personal AI Agents Could Become Digital Advocates in an AI-Dominated World

 

As generative AI agents proliferate, a new concept is gaining traction: AI entities that act as loyal digital advocates, protecting individuals from overwhelming technological complexity, misinformation, and data exploitation. Experts suggest these personal AI companions could function similarly to service animals—trained not just to assist, but to guard user interests in an AI-saturated world. From scam detection to helping navigate automated marketing and opaque algorithms, these agents would act as user-first shields. 

At a recent Imagination in Action panel, Consumer Reports’ Ginny Fahs explained, “As companies embed AI deeper into commerce, it becomes harder for consumers to identify fair offers or make informed decisions. An AI that prioritizes users’ interests can build trust and help transition toward a more transparent digital economy.” The idea is rooted in giving users agency and control in a system where most AI is built to serve businesses. Panelists—including experts like Dazza Greenwood, Amir Sarhangi, and Tobin South—discussed how loyal, trustworthy AI advocates could reshape personal data rights, online trust, and legal accountability. 

Greenwood drew parallels to early internet-era reforms such as e-signatures and automated contracts, suggesting a similar legal evolution is needed now to govern AI agents. South added that AI agents must be “loyal by design,” ensuring they act within legal frameworks and always prioritize the user. Sarhangi introduced the concept of “Know Your Agent” (KYA), which promotes transparency by tracking the digital footprint of an AI. 

With unique agent wallets and activity histories, bad actors could be identified and held accountable. Fahs described a tool called “Permission Slip,” which automates user requests like data deletion. This form of AI advocacy predates current generative models but shows how user-authorized agents could manage privacy at scale. Agents could also learn from collective behavior. For instance, an AI noting a negative review of a product could share that experience with other agents, building an automated form of word-of-mouth. 
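
None of the panelists shared an implementation, but the KYA idea of pairing an agent with a verifiable identity and a tamper-evident activity history can be sketched in a few lines. The sketch below is purely illustrative and assumes an Ed25519 keypair standing in for an "agent wallet" (via the PyNaCl library); the class and function names are hypothetical, not anything described on the panel.

import json
import time
from nacl.signing import SigningKey, VerifyKey

# Illustrative sketch only: an "agent wallet" modelled as an Ed25519 keypair,
# with every action signed so an outside party can audit the agent's history.
class AuditableAgent:
    def __init__(self):
        self.signing_key = SigningKey.generate()                    # private "wallet" key
        self.agent_id = self.signing_key.verify_key.encode().hex()  # public, shareable identity

    def record_action(self, action: str, details: dict) -> dict:
        entry = {"agent_id": self.agent_id, "ts": time.time(), "action": action, "details": details}
        payload = json.dumps(entry, sort_keys=True).encode()
        return {"entry": entry, "signature": self.signing_key.sign(payload).signature.hex()}

def verify_record(record: dict) -> bool:
    # Anyone holding only the public agent_id can confirm who performed the action.
    payload = json.dumps(record["entry"], sort_keys=True).encode()
    VerifyKey(bytes.fromhex(record["entry"]["agent_id"])).verify(payload, bytes.fromhex(record["signature"]))
    return True

agent = AuditableAgent()
record = agent.record_action("data_deletion_request", {"recipient": "example-broker.invalid"})
print(verify_record(record))  # True: the action is attributable to exactly this agent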

This concept, said panel moderator Sandy Pentland, mirrors how Consumer Reports aggregates user feedback to identify reliable products. South emphasized that cryptographic tools could ensure safe data-sharing without blindly trusting tech giants. He also referenced NANDA, a decentralized protocol from MIT that aims to enable trustworthy AI infrastructure. Still, implementing AI agents raises usability questions. “We want agents to understand nuanced permissions without constantly asking users to approve every action,” Fahs said. 

Getting this right will be crucial to user adoption. Pentland noted that current AI models struggle to align with individual preferences. “An effective agent must represent you—not a demographic group, but your unique values,” he said. Greenwood believes that’s now possible: “We finally have the tools to build AI agents with fiduciary responsibilities.” In closing, South stressed that the real bottleneck isn’t AI capability but structuring and contextualizing information properly. “If you want AI to truly act on your behalf, we must design systems that help it understand you.” 

As AI becomes deeply embedded in daily life, building personalized, privacy-conscious agents may be the key to ensuring technology serves people—not the other way around.

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use customer data to train their chatbots, drawing on both private and public sources. Some services take a relatively restrained, non-intrusive approach to gathering that data; others are far less careful. A recent analysis from data removal firm Incogni weighs the benefits and drawbacks of AI in terms of protecting your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy practices using 11 distinct criteria. The criteria addressed the following questions: 

  • What kind of data are the models trained on? 
  • Can user conversations be used to train the models? 
  • Can prompts be shared with non-service providers or other third parties? 
  • Can users' personal data be removed from the training dataset?
  • How clearly is it disclosed that prompts are used for training? 
  • How easy is it to find details about how the models were trained? 
  • Is there a clear privacy policy covering data collection?
  • How readable is the privacy policy? 
  • What sources are used to gather information about users?
  • Is data shared with third parties? 
  • What data do the AI apps collect? 

The research covered Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each service performed well on certain criteria and poorly on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

Le Chat emerged as the most privacy-friendly AI service overall. It did well in the transparency category, despite losing a few points, collects only a small amount of data, and scored highly on the privacy concerns specific to AI. 

Second place went to ChatGPT. Researchers at Incogni had some reservations about how user data interacts with the service and how OpenAI trains its models. However, ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives explicit instructions on how to restrict how your data is used. Grok took third place, followed by Claude and Pi. Each performed reasonably well at protecting user privacy overall, though there were issues in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

Some providers, including Grok, Mistral AI, Copilot, and ChatGPT, let you prevent your prompts from being used to train their models. Based on their privacy policies and other resources, however, other services, among them Gemini, DeepSeek, Pi AI, and Meta AI, do not appear to offer a way to stop this kind of data collection. In response to this concern, Anthropic stated that it never uses user input for model training. 

Ultimately, a clear and understandable privacy policy goes a long way toward helping you determine what information is being collected and how to opt out.

WhatsApp Ads Delayed in EU as Meta Faces Privacy Concerns

 

Meta recently introduced in-app advertisements within WhatsApp for users across the globe, marking the first time ads have appeared on the messaging platform. However, this change won’t affect users in the European Union just yet. According to the Irish Data Protection Commission (DPC), WhatsApp has informed them that ads will not be launched in the EU until sometime in 2026. 

Previously, Meta had stated that the feature would gradually roll out over several months but did not provide a specific timeline for European users. The newly introduced ads appear within the “Updates” tab on WhatsApp, specifically inside Status posts and the Channels section. Meta has stated that the ad system is designed with privacy in mind, using minimal personal data such as location, language settings, and engagement with content. If a user has linked their WhatsApp with the Meta Accounts Center, their ad preferences across Instagram and Facebook will also inform what ads they see. 

Despite these assurances, the integration of data across platforms has raised red flags among privacy advocates and European regulators. As a result, the DPC plans to review the advertising model thoroughly, working in coordination with other EU privacy authorities before approving a regional release. Des Hogan, Ireland’s Data Protection Commissioner, confirmed that Meta has officially postponed the EU launch and that discussions with the company will continue to assess the new ad approach. 

Dale Sunderland, another commissioner at the DPC, emphasized that the process remains in its early stages and it’s too soon to identify any potential regulatory violations. The commission intends to follow its usual review protocol, which applies to all new features introduced by Meta. This strategic move by Meta comes while the company is involved in a high-profile antitrust case in the United States. The lawsuit seeks to challenge Meta’s ownership of WhatsApp and Instagram and could potentially lead to a forced breakup of the company’s assets. 

Meta’s decision to push forward with deeper cross-platform ad integration may indicate confidence in its legal position. The tech giant continues to argue that its advertising tools are essential for small business growth and that any restrictions on its ad operations could negatively impact entrepreneurs who rely on Meta’s platforms for customer outreach. However, critics claim this level of integration is precisely why Meta should face stricter regulatory oversight—or even be broken up. 

As the U.S. court prepares to issue a ruling, the EU delay illustrates how Meta is navigating regulatory pressures differently across markets. After initial reporting, WhatsApp clarified that the 2025 rollout in the EU was never confirmed, and the current plan reflects ongoing conversations with European regulators.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

PocketPal AI Brings Offline AI Chatbot Experience to Smartphones With Full Data Privacy

 

In a digital world where most AI chatbots rely on cloud computing and constant internet connectivity, PocketPal AI takes a different approach by offering an entirely offline, on-device chatbot experience. This free app brings AI processing power directly onto your smartphone, eliminating the need to send data back and forth across the internet. Conventional AI chatbots typically transmit your interactions to distant servers, where the data is processed before a response is returned. That means even sensitive or routine conversations can be stored remotely, raising concerns about privacy, data usage, and the potential for misuse.

PocketPal AI flips this model by handling all computation on your device, ensuring your data never leaves your phone unless you explicitly choose to save or share it. This local processing model is especially useful in areas with unreliable internet or no access at all. Whether you’re traveling in rural regions, riding the metro, or flying, PocketPal AI works seamlessly without needing a connection. 

Additionally, using an AI offline helps reduce mobile data consumption and improves speed, since there’s no delay waiting for server responses. The app is available on both iOS and Android and offers users the ability to interact with compact but capable language models. While you do need an internet connection during the initial setup to download a language model, once that’s done, PocketPal AI functions completely offline. To begin, users select a model from the app’s library or upload one from their device or from the Hugging Face community. 

Although the app lists models without detailed descriptions, users can consult external resources to understand which model is best for their needs—whether it’s from Meta, Microsoft, or another developer. After downloading a model—most of which are several gigabytes in size—users simply tap “Load” to activate the model, enabling conversations with their new offline assistant. 
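
PocketPal AI's internals aren't documented here, but the workflow it describes, downloading a compact GGUF model once and then running it entirely on the device, can be approximated on a desktop with the llama-cpp-python library. The snippet below is a rough analogy under that assumption, not PocketPal's actual code, and the model path is a placeholder for whichever file you downloaded from Hugging Face.

from llama_cpp import Llama

# Rough desktop analogy of offline, on-device inference (not PocketPal's code).
# Assumes a small GGUF model has already been downloaded, e.g. from the
# Hugging Face community; the path below is a placeholder.
llm = Llama(
    model_path="./models/small-chat-model.gguf",  # local file, so no network is needed
    n_ctx=2048,     # modest context window to fit limited memory
    n_threads=4,    # tune to the device's CPU
)

# Everything below runs locally; neither the prompt nor the response leaves the machine.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, why does on-device AI help privacy?"}],
    max_tokens=96,
)
print(response["choices"][0]["message"]["content"])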

For those more technically inclined, PocketPal AI includes advanced settings for switching between models, adjusting inference behavior, and testing performance. While these features offer great flexibility, they’re likely best suited for power users. On high-end devices like the Pixel 9 Pro Fold, PocketPal AI runs smoothly and delivers fast responses. 

However, older or budget devices may face slower load times or stuttering performance due to limited memory and processing power. Because offline models must be optimized for device constraints, they tend to be smaller in size and capabilities compared to cloud-based systems. As a result, while PocketPal AI handles common queries, light content generation, and basic conversations well, it may not match the contextual depth and complexity of large-scale models hosted in the cloud. 

Even with these trade-offs, PocketPal AI offers a powerful solution for users seeking AI assistance without sacrificing privacy or depending on an internet connection. It delivers a rare combination of utility, portability, and data control in today’s cloud-dominated AI ecosystem. 

As privacy awareness and concerns about centralized data storage continue to grow, PocketPal AI represents a compelling alternative—one that puts users back in control of their digital interactions, no matter where they are.

Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience

 

At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

Beware iPhone Users: Indian Government Issues Urgent Advisory Over Data Theft Risk

 

The Indian government has issued an urgent security warning to iPhone and iPad users, citing major flaws in Apple's iOS and iPadOS software. If not addressed, these vulnerabilities could allow cybercriminals to access sensitive user data or make devices inoperable. The advisory was issued by the Indian Computer Emergency Response Team (CERT-In), which is part of the Ministry of Electronics and Information Technology, and urged users to act immediately.

Apple devices running older versions of iOS (prior to 18.3) and iPadOS (prior to 17.7.3 or 18.3) are particularly vulnerable to the security flaws. Popular models that fall within this category include the iPhone XS and newer, the iPad Pro (2nd generation and later), iPad (6th generation and later), iPad Air (3rd generation and later), and iPad mini (5th generation and later). 

One of the major flaws involves the Darwin notification system, a core part of Apple's inter-process messaging. The vulnerability enables unauthorised apps to send system-level notifications without requiring additional permissions. If exploited, it could freeze or crash the device, requiring user intervention to restore functionality.

These flaws present serious threats. Hackers could gain access to sensitive information such as personal details, financial information, and more. In other cases, they could circumvent the device's built-in security protections and run malicious code that jeopardises the system's integrity. In the worst-case scenario, a hacker could crash the device, rendering it completely unusable. CERT-In has also confirmed that some of these flaws are being actively exploited, emphasising the need for users to act quickly. 

Apple has responded by releasing security updates to fix these vulnerabilities. Affected users are strongly advised to update their devices to the latest version of iOS or iPadOS as soon as possible; this update is critical for defending against these threats. Users are also cautioned against downloading suspicious or unverified apps, which could act as entry points for malware, and should monitor any unusual device behaviour, as it may indicate a security compromise. 

As Apple's footprint in India grows, it is more important than ever that users remain informed and cautious. Regular software updates and sensible, cautious usage habits are critical for guarding against the growing threat of cyberattacks. By taking these proactive measures, iPhone and iPad users can improve the security of their devices and sensitive data.

Here's Why Websites Are Offering "Ad-Lite" Premium Subscriptions

 

Some websites let you remove adverts entirely after subscribing, while others now offer "ad-lite" memberships. However, these ad-supported subscriptions rarely give you the best value. 

Not removing all ads

Ads are a significant source of income for many websites, even though they can be annoying. Many sites can also detect ad-blockers, so the old workarounds may no longer be effective.

Because adverts aren't going away, fully ad-free memberships are a decent compromise for websites: the site keeps earning the money it needs to operate while giving users an ad-free experience. In this case, everybody wins. 

Ad-lite subscriptions, however, are not always the most cost-effective option. Rather than removing adverts entirely, they simply stop showing you personalised ads. While others may disagree, I can't see how this would encourage me to subscribe; I'd rather pay an extra few dollars per month to remove them completely. 

Beyond text-based websites, YouTube has tested a Premium Lite tier. Although not all videos are ad-free, the majority are. Subscribing makes no sense for me if the videos that still carry advertisements are on topics I'm interested in. 

Using personal data 

Many websites will track your behaviour because many advertisements are tailored to your preferences. Advertisers can then use this information to recommend items and services that they believe you would be interested in.

Given that many people have been more concerned about their privacy in recent years, it's reasonable that some may wish to pay money to prevent having their data used. While this is occasionally the case, certain websites may continue to utilise your information even after you subscribe to an ad-lite tier. 

Websites continue to require user information in order to get feedback and improve their services. As a result, your data may still be used in certain scenarios. The key distinction is that it will rarely be used for advertising; while this may be sufficient for some, others may find it more aggravating. It is difficult to avoid being tracked online under any circumstances. You can still be tracked while browsing in incognito or private mode.

Use ad-free version

Many websites with ad-lite tiers also provide totally ad-free versions. When you subscribe to them, you will not receive any personalised or non-personalised advertisements. Furthermore, you frequently get access to exclusive and/or unlimited content, allowing you to fully support your preferred publications. Rather than focusing on the price, evaluate how much value you'll gain from subscribing to an ad-free tier. It's usually less expensive than ad-lite. 

An ad-lite membership essentially gives you the worst of everything you were trying to avoid: you'll still see adverts, just less personalised ones, and you may end up seeing ads on content you enjoy while paying for ad-free access to content you don't care about. It's preferable to pay for the fully ad-free version.

WhatsApp Reveals "Private Processing" Feature for Cloud Based AI Features

WhatsApp Reveals "Private Processing" Feature for Cloud Based AI Features

WhatsApp claims even it cannot access users' private data

WhatsApp has introduced ‘Private Processing,’ a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers without exposing their chats to Meta. Meta claims even it cannot see the messages while processing them. The system relies on encrypted cloud infrastructure and hardware-based isolation so that requests remain invisible to everyone, including Meta, while they are processed. 

About private processing

For those who decide to use Private Processing, the system begins with an anonymous verification step via the user’s WhatsApp client to confirm that the request comes from a legitimate user. 

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

Private Processing employs Trusted Execution Environments (TEEs), secure virtual machines running on cloud infrastructure that keep AI requests hidden from everyone, including Meta. 

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption (sketched below)
  • Restricts storage or logging of messages after processing
  • Makes logs and binary images available for external verification and audits
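
Meta hasn't published client code for this flow, but the first bullet, encrypting a request so that only the TEE can open it, follows a familiar pattern: sealing the payload to the enclave's public key. The toy sketch below illustrates only that idea, using PyNaCl's SealedBox as a stand-in; the real system's attestation, key distribution, and OHTTP transport are far more involved.

from nacl.public import PrivateKey, SealedBox

# Conceptual sketch only: a request sealed so that only the holder of the
# enclave's private key (the TEE) can decrypt it. This is a stand-in using
# PyNaCl's SealedBox, not WhatsApp's actual protocol or attestation flow.
enclave_key = PrivateKey.generate()        # in reality, obtained and verified via remote attestation
enclave_public = enclave_key.public_key

# Client side: encrypt the AI request to the enclave's public key.
request = b"Summarise this chat thread for me."
sealed_request = SealedBox(enclave_public).encrypt(request)

# Inside the TEE only: decrypt, process, and (per the stated design) store nothing.
plaintext = SealedBox(enclave_key).decrypt(sealed_request)
assert plaintext == request                # intermediaries, including Meta, see only ciphertext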

WhatsApp builds AI amid wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI and messaging. WhatsApp now joins companies like Apple that introduced confidential AI computing models in the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.

The approach is similar to Apple’s Private Cloud Compute in terms of public transparency and stateless processing. Currently, however, WhatsApp is using it only for select features. Apple, by contrast, has declared plans to implement this model across all of its AI tools, whereas WhatsApp has made no such commitment yet. 

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”

ProtectEU and VPN Privacy: What the EU Encryption Plan Means for Online Security

 

Texting through SMS is pretty much a thing of the past. Most people today rely on apps like WhatsApp and Signal to share messages, make encrypted calls, or send photos—all under the assumption that our conversations are private. But that privacy could soon be at risk in the EU.

On April 1, 2025, the European Commission introduced a new plan called ProtectEU. Its goal is to create a roadmap for “lawful and effective access to data for law enforcement,” particularly targeting encrypted platforms. While messaging apps are the immediate focus, VPN services might be next. VPNs rely on end-to-end encryption and strict no-log policies to keep users anonymous. However, if ProtectEU leads to mandatory encryption backdoors or expanded data retention rules, that could force VPN providers to change how they operate—or leave the EU altogether. 

Proton VPN’s Head of Public Policy, Jurgita Miseviciute, warns that weakening encryption won’t solve security issues. Instead, she believes it would put users at greater risk, allowing bad actors to exploit the same access points created for law enforcement. Proton is monitoring the plan closely, hoping the EU will consider solutions that protect encryption. Surfshark takes a more optimistic view. Legal Head Gytis Malinauskas says the strategy still lacks concrete policy direction and sees the emphasis on cybersecurity as a potential boost for privacy tools like VPNs. Mullvad VPN isn’t convinced. 

Having fought against earlier EU proposals to scan private chats, Mullvad criticized ProtectEU as a rebranded version of old policies, expressing doubt it will gain wide support. One key concern is data retention. If the EU decides to require VPNs to log user activity, it could fundamentally conflict with their privacy-first design. Denis Vyazovoy of AdGuard VPN notes that such laws could make no-log VPNs unfeasible, prompting providers to exit the EU market—much like what happened in India in 2022. NordVPN adds that the more data retained, the more risk users face from breaches or misuse. 

Even though VPNs aren’t explicitly targeted yet, an EU report has listed them as a challenge to investigations—raising concerns about future regulations. Still, Surfshark sees the current debate as a chance to highlight the legitimate role VPNs play in protecting everyday users. While the future remains uncertain, one thing is clear: the tension between privacy and security is only heating up.

Best Encrypted Messaging Apps: Signal vs Telegram vs WhatsApp Privacy Guide

 

Encrypted messaging apps have become essential tools in the age of cyber threats and surveillance. With rising concerns over data privacy, especially after recent high-profile incidents, users are turning to platforms that offer more secure communication. Among the top contenders are Signal, Telegram, and WhatsApp—each with its own approach to privacy, encryption, and data handling. 

Signal is widely regarded as the gold standard when it comes to messaging privacy. Backed by a nonprofit foundation and funded through grants and donations, Signal doesn’t rely on user data for profit. It collects minimal information—just your phone number—and offers strong on-device privacy controls, like disappearing messages and call relays to mask IP addresses. Being open-source, Signal allows independent audits of its code, ensuring transparency. Even when subpoenaed, the app could only provide limited data like account creation date and last connection, making it a favorite among journalists, whistleblowers, and privacy advocates.  

Telegram offers a broader range of features but falls short on privacy. While it supports end-to-end encryption, this is limited only to its “secret chats,” and not enabled by default in regular messages or public channels. Telegram also stores metadata, such as IP addresses and contact info, and recently updated its privacy policy to allow data sharing with authorities under legal requests. Despite this, it remains popular for public content sharing and large group chats, thanks to its forum-like structure and optional paid features. 

WhatsApp, with over 2 billion users, is the most widely used encrypted messaging app. It employs the same encryption protocol as Signal, ensuring end-to-end protection for chats and calls. However, as a Meta-owned platform, it collects significant user data—including device information, usage logs, and location data. Even people not using WhatsApp can have their data collected via synced contacts. While messages remain encrypted, the amount of metadata stored makes it less privacy-friendly compared to Signal. 

All three apps offer some level of encrypted messaging, but Signal stands out for its minimal data collection, open-source transparency, and commitment to privacy. Telegram provides a flexible chat experience with weaker privacy controls, while WhatsApp delivers strong encryption within a data-heavy ecosystem. Choosing the best encrypted messaging app depends on what you prioritize more: security, features, or convenience.

Apple and Google App Stores Host VPN Apps Linked to China, Face Outrage

Google (GOOGL) and Apple (AAPL) are under harsh scrutiny after a recent report disclosed that their app stores host VPN applications associated with Qihoo 360, a Chinese cybersecurity firm blacklisted by the U.S. government. The Financial Times reports that five VPNs still available to U.S. users, including VPN Proxy Master and Turbo VPN, are linked to Qihoo, which was sanctioned in 2020 over alleged military ties. 

Illusion of Privacy: VPNs collecting data 

In 2025 alone, three of the VPN apps have had over a million downloads on Google Play and Apple’s App Store, suggesting these aren’t small-time apps, Sensor Tower reports. They are advertised as “private browsing” tools, yet they give their parent companies full visibility into users’ online activity. This is alarming because China’s national security laws require companies to hand over user data if the government demands it. 

Concerns around ownership structures

The intricate web of ownership raises important questions: the apps are run by Singapore-based Innovative Connecting, owned by Lemon Seed, a Cayman Islands firm. Qihoo acquired Lemon Seed for $69.9 million in 2020 and claimed to have sold the business months later, but the FT reports that the China-based team building the applications remained under Qihoo’s umbrella for years. According to the FT, one developer said, “You could say that we’re part of them, and you could say we’re not. It’s complicated.”

Amid outrage, Google and Apple respond 

Google said it strives to comply with sanctions and removes violators when it finds them. Apple, which says it enforces strict rules on VPN data sharing, removed two of the apps, Snap VPN and Thunder VPN, after the FT contacted the company.

Privacy scare can damage stock valuations

What Google and Apple face is more than public outrage. Investors prioritise data privacy, and regulatory risk has increased, particularly given growing concerns around U.S. tech firms’ links to China. If the U.S. government gets involved, the result could be stricter rules, fines, and further app removals, none of which would please shareholders. 

According to FT, “Innovative Connecting said the content of the article was not accurate and declined to comment further. Guangzhou Lianchuang declined to comment. Qihoo and Chen Ningyi did not respond to requests for comment.”

Orion Brings Fully Homomorphic Encryption to Deep Learning for AI Privacy

 

As data privacy becomes an increasing concern, a new artificial intelligence (AI) encryption breakthrough could transform how sensitive information is handled. Researchers Austin Ebel, Karthik Garimella, and Assistant Professor Brandon Reagen have developed Orion, a framework that integrates fully homomorphic encryption (FHE) into deep learning. 

This advancement allows AI systems to analyze encrypted data without decrypting it, ensuring privacy throughout the process. FHE has long been considered a major breakthrough in cryptography because it enables computations on encrypted information while keeping it secure. However, applying this method to deep learning has been challenging due to the heavy computational requirements and technical constraints. Orion addresses these challenges by automating the conversion of deep learning models into FHE-compatible formats. 
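
Orion targets fully homomorphic schemes wired into PyTorch, which is well beyond a short snippet, but the core idea of computing on ciphertexts can be shown with the much simpler, additively homomorphic Paillier scheme. The toy below uses the python-paillier (phe) library purely to illustrate that principle; it is not Orion's API and is nowhere near FHE-grade functionality.

from phe import paillier

# Toy illustration of computing on encrypted data using the additively
# homomorphic Paillier scheme. This is NOT Orion or FHE; it only shows that
# a server can combine ciphertexts without ever seeing the underlying values.
public_key, private_key = paillier.generate_paillier_keypair()

# Client: encrypt private inputs before sending them anywhere.
encrypted_features = [public_key.encrypt(x) for x in [3.5, 1.2, -0.7]]

# Server: applies a fixed linear model directly to ciphertexts it cannot read.
weights = [0.4, 0.1, 2.0]
encrypted_score = encrypted_features[0] * weights[0]
for enc_x, w in zip(encrypted_features[1:], weights[1:]):
    encrypted_score = encrypted_score + enc_x * w

# Client: only the private-key holder can decrypt the result.
print(round(private_key.decrypt(encrypted_score), 4))  # 0.12 = 0.4*3.5 + 0.1*1.2 + 2.0*(-0.7)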

The researchers’ study, recently published on arXiv and set to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, highlights Orion’s ability to make privacy-focused AI more practical. One of the biggest concerns in AI today is that machine learning models require direct access to user data, raising serious privacy risks. Orion eliminates this issue by allowing AI to function without exposing sensitive information. The framework is built to work with PyTorch, a widely used machine learning library, making it easier for developers to integrate FHE into existing models. 

Orion also introduces optimization techniques that reduce computational burdens, making privacy-preserving AI more efficient and scalable. Orion has demonstrated notable performance improvements, achieving speeds 2.38 times faster than previous FHE deep learning methods. The researchers successfully implemented high-resolution object detection using the YOLO-v1 model, which contains 139 million parameters—a scale previously considered impractical for FHE. This progress suggests Orion could enable encrypted AI applications in sectors like healthcare, finance, and cybersecurity, where protecting user data is essential. 

A key advantage of Orion is its accessibility. Traditional FHE implementations require specialized knowledge, making them difficult to adopt. Orion simplifies the process, allowing more developers to use the technology without extensive training. By open-sourcing the framework, the research team hopes to encourage further innovation and adoption. As AI continues to expand into everyday life, advancements like Orion could help ensure that technological progress does not come at the cost of privacy and security.

AI and Privacy – Issues and Challenges

 

Artificial intelligence is changing cybersecurity and digital privacy. It promises better security but also raises concerns about ethical boundaries, data exploitation, and surveillance. From facial recognition software to predictive crime prevention, consumers are left wondering where to draw the line between safety and overreach as AI-driven systems become ever more integrated into daily life.

The same artificial intelligence (AI) tools that help spot online threats, optimise security procedures, and stop fraud can also be used for intrusive data collection, behavioural tracking, and mass surveillance. The use of AI-powered surveillance in corporate data mining, law enforcement profiling, and government tracking has drawn criticism in recent years. Without clear regulations and transparency, AI risks undermining rather than defending basic rights. 

AI and data ethics

Despite encouraging developments, there are numerous instances of AI-driven systems going awry, and they raise serious questions. Facial recognition company Clearview AI amassed one of the largest facial recognition databases in the world by illegally scraping billions of photos from social media. Clearview's technology was employed by governments and law enforcement organisations across the globe, leading to lawsuits and regulatory action over mass surveillance. 

The UK Department for Work and Pensions used an AI system to detect welfare fraud. An internal investigation suggested that the system disproportionately targeted people based on their age, disability, marital status, and nationality. This bias resulted in certain groups being unfairly singled out for fraud investigations, raising questions about discrimination and the ethical use of artificial intelligence in public services. Despite earlier assurances of impartiality, the findings have fuelled calls for greater openness and oversight of government AI use. 

Regulations and consumer protection

The ethical use of AI is being regulated by governments worldwide, with a number of significant regulations having an immediate impact on consumers. The AI Act of the European Union, which is scheduled to go into force in 2025, divides AI applications into risk categories. 

Strict rules will apply to high-risk technologies, such as biometric surveillance and facial recognition, to guarantee transparency and ethical deployment. The possibility of severe sanctions for non-compliant companies further reinforces the EU's commitment to responsible AI governance. 

In the United States, California's Consumer Privacy Act gives individuals more control over their personal data. Consumers have the right to know what information firms gather about them, to request its erasure, and to opt out of data sales. This law adds an important layer of privacy protection in an era where AI-powered data processing is becoming more common. 

The White House has recently introduced the AI Bill of Rights, a framework aimed at encouraging responsible AI practices. While not legally enforceable, it emphasises the need for privacy, transparency, and algorithmic fairness, pointing to a broader push for ethical AI development in policymaking.

Encryption Under Siege: A New Wave of Attacks Intensifies

 

Over the past decade, encrypted communication has become a standard for billions worldwide. Platforms like Signal, iMessage, and WhatsApp use default end-to-end encryption, ensuring user privacy. Despite widespread adoption, governments continue pushing for greater access, threatening encryption’s integrity.

Recently, authorities in the UK, France, and Sweden have introduced policies that could weaken encryption, adding to EU and Indian regulatory measures that challenge privacy. Meanwhile, US intelligence agencies, previously critical of encryption, now advocate for its use after major cybersecurity breaches. The shift follows an incident where the China-backed hacking group Salt Typhoon infiltrated US telecom networks. Simultaneously, the second Trump administration is expanding surveillance of undocumented migrants and reassessing intelligence-sharing agreements.

“The trend is bleak,” says Carmela Troncoso, privacy and cryptography researcher at the Max-Planck Institute for Security and Privacy. “New policies are emerging that undermine encryption.”

Law enforcement argues encryption obstructs criminal investigations, leading governments to demand backdoor access to encrypted platforms. Experts warn such access could be exploited by malicious actors, jeopardizing security. Apple, for example, recently withdrew its encrypted iCloud backup system from the UK after receiving a secret government order. The company’s compliance would require creating a backdoor, a move expected to be challenged in court on March 14. Similarly, Sweden is considering laws requiring messaging services like Signal and WhatsApp to retain message copies for law enforcement access, prompting Signal to threaten market exit.

“Some democracies are reverting to crude approaches to circumvent encryption,” says Callum Voge, director of governmental affairs at the Internet Society.

A growing concern is client-side scanning, a technology that scans messages on users’ devices before encryption. While presented as a compromise, experts argue it introduces vulnerabilities. The EU has debated its implementation for years, with some member states advocating stronger encryption while others push for increased surveillance. Apple abandoned a similar initiative after warning that scanning for one type of content could pave the way for mass surveillance.

“Europe is divided, with some countries strongly in favor of scanning and others strongly against it,” says Voge.

Another pressing threat is the potential banning of encrypted services. Russia blocked Signal in 2024, while India’s legal battle with WhatsApp could force the platform to abandon encryption or exit the market. The country has already prohibited multiple VPN services, further limiting digital privacy options.

Despite mounting threats, pro-encryption responses have emerged. The US Cybersecurity and Infrastructure Security Agency and the FBI have urged encrypted communication use following recent cybersecurity breaches. Sweden’s armed forces also endorse Signal for unclassified communications, recognizing its security benefits.

With the UK’s March 14 legal proceedings over Apple’s backdoor request approaching, US senators and privacy organizations are demanding greater transparency. UK civil rights groups are challenging the confidential nature of such surveillance orders.

“The UK government may have come for Apple today, but tomorrow it could be Google, Microsoft, or even your VPN provider,” warns Privacy International.

Encryption remains fundamental to human rights, safeguarding free speech, secure communication, and data privacy. “Encryption is crucial because it enables a full spectrum of human rights,” says Namrata Maheshwari of Access Now. “It supports privacy, freedom of expression, organization, and association.”

As governments push for greater surveillance, the fight for encryption and privacy continues, shaping the future of digital security worldwide.


How Data Removal Services Protect Your Online Privacy from Brokers

 

Data removal services play a crucial role in safeguarding online privacy by helping individuals remove their personal information from data brokers and people-finding websites. Every time users browse the internet, enter personal details on websites, or use search engines, they leave behind a digital footprint. This data is often collected by aggregators and sold to third parties, including marketing firms, advertisers, and even organizations with malicious intent. With data collection becoming a billion-dollar industry, the need for effective data removal services has never been more urgent. 

Many people are unaware of how much information is available about them online. A simple Google search may reveal social media profiles, public records, and forum posts, but this is just the surface. Data brokers go even further, gathering information from browsing history, purchase records, loyalty programs, and public documents such as birth and marriage certificates. This data is then packaged and sold to interested buyers, creating a detailed digital profile of individuals without their explicit consent. 

Data removal services work by identifying where a person’s data is stored, sending removal requests to brokers, and ensuring that information is deleted from their records. These services automate the process, saving users the time and effort required to manually request data removal from hundreds of sources. Some of the most well-known data removal services include Incogni, Aura, Kanary, and DeleteMe. While each service may have a slightly different approach, they generally follow a similar process. Users provide their personal details, such as name, email, and address, to the data removal service. 

The service then scans databases of data brokers and people-finder sites to locate where personal information is being stored. Automated removal requests are sent to these brokers, requesting the deletion of personal data. While some brokers comply with these requests quickly, others may take longer or resist removal efforts. A reliable data removal service provides transparency about the process and expected timelines, ensuring users understand how their information is being handled. Data brokers profit immensely from selling personal data, with the industry estimated to be worth over $400 billion. 
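
Commercial services keep their broker directories and request pipelines proprietary, but the basic mechanic described above, drafting and tracking a deletion request for each broker, is easy to sketch. The example below is purely hypothetical: the broker names, contact addresses, and template wording are placeholders, not real opt-out endpoints or legal advice.

from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical sketch of the loop a data removal service automates: draft a
# deletion request per broker and track its status. Broker names and contact
# addresses are placeholders, not real opt-out contacts.
@dataclass
class RemovalRequest:
    broker: str
    contact_email: str
    status: str = "drafted"
    sent_on: Optional[date] = None

REQUEST_TEMPLATE = """To {broker}:

I request deletion of all personal data you hold about {full_name} (email: {email}),
under applicable privacy law (for example, GDPR Article 17 or the CCPA).
Please confirm completion within the statutory deadline."""

def draft_requests(full_name: str, email: str, brokers: dict) -> list:
    queue = []
    for broker, contact in brokers.items():
        body = REQUEST_TEMPLATE.format(broker=broker, full_name=full_name, email=email)
        print(f"--- request to {contact} ---\n{body}\n")
        queue.append(RemovalRequest(broker=broker, contact_email=contact))
    return queue

# A real service maintains hundreds of entries plus per-broker quirks
# (web forms, identity verification, follow-up schedules).
brokers = {"ExampleBroker": "privacy@example-broker.invalid",
           "SamplePeopleFinder": "optout@sample-finder.invalid"}
print(len(draft_requests("Jane Doe", "jane@example.com", brokers)), "requests drafted")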

Major players like Experian, Equifax, and Acxiom collect a wide range of information, including addresses, birth dates, family status, hobbies, occupations, and even social security numbers. People-finding services, such as BeenVerified and Truthfinder, operate similarly by aggregating publicly available data and making it easily accessible for a fee. Unfortunately, this information can also fall into the hands of bad actors who use it for identity theft, fraud, or online stalking. 

For individuals concerned about privacy, data removal services offer a proactive way to reclaim control over personal information. Journalists, victims of stalking or abuse, and professionals in sensitive industries particularly benefit from these services. However, in an age where data collection is a persistent and lucrative business, staying vigilant and using trusted privacy tools is essential for maintaining online anonymity.

Microsoft MUSE AI: Revolutionizing Game Development with WHAM and Ethical Challenges

 

Microsoft has developed MUSE, a cutting-edge AI model that is set to redefine how video games are created and experienced. This advanced system leverages artificial intelligence to generate realistic gameplay elements, making it easier for developers to design and refine virtual environments. By learning from vast amounts of gameplay data, MUSE can predict player actions, create immersive worlds, and enhance game mechanics in ways that were previously impossible. While this breakthrough technology offers significant advantages for game development, it also raises critical discussions around data security and ethical AI usage. 

One of MUSE’s most notable features is its ability to automate and accelerate game design. Developers can use the AI model to quickly prototype levels, test different gameplay mechanics, and generate realistic player interactions. This reduces the time and effort required for manual design while allowing for greater experimentation and creativity. By streamlining the development process, MUSE provides game studios—both large and small—the opportunity to push the boundaries of innovation. 

The AI system is built on an advanced framework that enables it to interpret and respond to player behaviors. By analyzing game environments and user inputs, MUSE can dynamically adjust in-game elements to create more engaging experiences. This could lead to more adaptive and personalized gaming, where the AI tailors challenges and story progression based on individual player styles. Such advancements have the potential to revolutionize game storytelling and interactivity. 

Despite its promising capabilities, the introduction of AI-generated gameplay also brings important concerns. The use of player data to train these models raises questions about privacy and transparency. Developers must establish clear guidelines on how data is collected and ensure that players have control over their information. Additionally, the increasing role of AI in game creation sparks discussions about the balance between human creativity and machine-generated content. 

While AI can enhance development, it is essential to preserve the artistic vision and originality that define gaming as a creative medium. Beyond gaming, the technology behind MUSE could extend into other industries, including education and simulation-based training. AI-generated environments can be used for virtual learning, professional skill development, and interactive storytelling in ways that go beyond traditional gaming applications. 

As AI continues to evolve, its role in shaping digital experiences will expand, making it crucial to address ethical considerations and responsible implementation. The future of AI-driven game development is still unfolding, but MUSE represents a major step forward. 

By offering new possibilities for creativity and efficiency, it has the potential to change how games are built and played. However, the industry must carefully navigate the challenges that come with AI’s growing influence, ensuring that technological progress aligns with ethical and artistic integrity.

The Need for Unified Data Security, Compliance, and AI Governance

 

Businesses are increasingly dependent on data, yet many continue to rely on outdated security infrastructures and fragmented management approaches. These inefficiencies leave organizations vulnerable to cyber threats, compliance violations, and operational disruptions. Protecting data is no longer just about preventing breaches; it requires a fundamental shift in how security, compliance, and AI governance are integrated into enterprise strategies. A proactive and unified approach is now essential to mitigate evolving risks effectively. 

The rapid advancement of artificial intelligence has introduced new security challenges. AI-powered tools are transforming industries, but they also create vulnerabilities if not properly managed. Many organizations implement AI-driven applications without fully understanding their security implications. AI models require vast amounts of data, including sensitive information, making governance a critical priority. Without robust oversight, these models can inadvertently expose private data, operate without transparency, and pose compliance challenges as new regulations emerge. 

Businesses must ensure that AI security measures evolve in tandem with technological advancements to minimize risks. Regulatory requirements are also becoming increasingly complex. Governments worldwide are enforcing stricter data privacy laws, such as GDPR and CCPA, while also introducing new regulations specific to AI governance. Non-compliance can result in heavy financial penalties, reputational damage, and operational setbacks. Businesses can no longer treat compliance as an afterthought; instead, it must be an integral part of their data security strategy. Organizations must shift from reactive compliance measures to proactive frameworks that align with evolving regulatory expectations. 

Another significant challenge is the growing issue of data sprawl. As businesses store and manage data across multiple cloud environments, SaaS applications, and third-party platforms, maintaining control becomes increasingly difficult. Security teams often lack visibility into where sensitive information resides, making it harder to enforce access controls and protect against cyber threats. Traditional security models that rely on layering additional tools onto existing infrastructures are no longer effective. A centralized, AI-driven approach to security and governance is necessary to address these risks holistically. 

Forward-thinking businesses recognize that managing security, compliance, and AI governance in isolation is inefficient. A unified approach consolidates risk management efforts into a cohesive, scalable framework. By breaking down operational silos, organizations can streamline workflows, improve efficiency through AI-driven automation, and proactively mitigate security threats. Integrating compliance and security within a single system ensures better regulatory adherence while reducing the complexity of data management. 

To stay ahead of emerging threats, organizations must modernize their approach to data security and governance. Investing in AI-driven security solutions enables businesses to automate data classification, detect vulnerabilities, and safeguard sensitive information at scale. Shifting from reactive compliance measures to proactive strategies ensures that regulatory requirements are met without last-minute adjustments. Moving away from fragmented security solutions and adopting a modular, scalable platform allows businesses to reduce risk and maintain resilience in an ever-evolving digital landscape. Those that embrace a forward-thinking, unified strategy will be best positioned for long-term success.

South Korea Blocks DeepSeek AI App Downloads Amid Data Security Investigation

 

South Korea has taken a firm stance on data privacy by temporarily blocking downloads of the Chinese AI app DeepSeek. The decision, announced by the Personal Information Protection Commission (PIPC), follows concerns about how the company collects and handles user data. 

While the app remains accessible to existing users, authorities have strongly advised against entering personal information until a thorough review is complete. DeepSeek, developed by the Chinese AI Lab of the same name, launched in South Korea earlier this year. Shortly after, regulators began questioning its data collection practices. 

Upon investigation, the PIPC discovered that DeepSeek had transferred South Korean user data to ByteDance, the parent company of TikTok. This revelation raised red flags, given the ongoing global scrutiny of Chinese tech firms over potential security risks. South Korea’s response reflects its increasing emphasis on digital sovereignty. The PIPC has stated that DeepSeek will only be reinstated on app stores once it aligns with national privacy regulations. 

The AI company has since appointed a local representative and acknowledged that it was unfamiliar with South Korea’s legal framework when it launched the service. It has now committed to working with authorities to address compliance issues. DeepSeek’s privacy concerns extend beyond South Korea. Earlier this month, key government agencies—including the Ministry of Trade, Industry, and Energy, as well as Korea Hydro & Nuclear Power—temporarily blocked the app on official devices, citing security risks. 

Australia has already prohibited the use of DeepSeek on government devices, while Italy’s data protection agency has ordered the company to disable its chatbot within its borders. Taiwan has gone a step further by banning all government departments from using DeepSeek AI, further illustrating the growing hesitancy toward Chinese AI firms. 

DeepSeek, founded in 2023 by Liang Wenfeng in Hangzhou, China, has positioned itself as a competitor to OpenAI’s ChatGPT, offering a free, open-source AI model. However, its rapid expansion has drawn scrutiny over potential data security vulnerabilities, especially in regions wary of foreign digital influence. South Korea’s decision underscores the broader challenge of regulating artificial intelligence in an era of increasing geopolitical and technological tensions. 

As AI-powered applications become more integrated into daily life, governments are taking a closer look at the entities behind them, particularly when sensitive user data is involved. For now, DeepSeek’s future in South Korea hinges on whether it can address regulators’ concerns and demonstrate full compliance with the country’s strict data privacy standards. Until then, authorities remain cautious about allowing the app’s unrestricted use.