
Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience

 

At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

Beware iPhone Users: Indian Government Issues Urgent Advisory Over Data Theft Risk

 

The Indian government has issued an urgent security warning to iPhone and iPad users, citing major flaws in Apple's iOS and iPadOS software. If not addressed, these vulnerabilities could allow cybercriminals to access sensitive user data or make devices inoperable. The advisory was issued by the Indian Computer Emergency Response Team (CERT-In), which is part of the Ministry of Electronics and Information Technology, and urged users to act immediately.

Apple devices running older versions of iOS (prior to 18.3) and iPadOS (prior to 17.7.3 or 18.3) are particularly exposed to the security flaws. Affected models include the iPhone XS and newer, the iPad Pro (2nd generation and later), iPad (6th generation and later), iPad Air (3rd generation and later), and iPad mini (5th generation and later). 

One of the major flaws lies in the Darwin notification system, a core part of Apple's inter-process messaging. The vulnerability allows unauthorised apps to send system-level notifications without requiring additional permissions. If exploited, the device could freeze or crash, requiring user intervention to restore functionality.

These flaws present serious threats. Hackers could gain access to sensitive information such as personal details and financial data. In other cases, they could circumvent the device's built-in security protections, running malicious code that jeopardises the system's integrity. In the worst-case scenario, a hacker could crash the device, rendering it completely unusable. CERT-In has also confirmed that some of these flaws are being actively exploited by hackers, emphasising the need for users to act quickly. 

Apple has responded by releasing security updates to fix these vulnerabilities. Affected users are strongly advised to update their devices to the latest version of iOS or iPadOS as soon as possible; this update is critical to defending against potential threats. Additionally, users are cautioned against downloading suspicious or unverified apps, as these could act as entry points for malware. It is also important to monitor any unusual device behaviour, as it may indicate a security risk. 

As Apple's footprint in India grows, it is more critical than ever that people remain informed and cautious. Regular software upgrades and sensible, cautious usage patterns are critical for guarding against the growing threat of cyber assaults. iPhone and iPad users can improve the security of their devices and sensitive data by taking proactive measures.

Here's Why Websites Are Offering "Ad-Lite" Premium Subscriptions

 

Some websites let you remove adverts entirely after subscribing, while others now offer "ad-lite" memberships. However, these ad-supported subscription tiers rarely give you the best value. 

Not removing all ads

Ads are a significant source of income for many websites, despite the fact that they can be annoying. Additionally, many websites now detect ad-blockers, so the old workarounds may no longer be as effective.

For websites, fully ad-free memberships are a sensible compromise, because adverts aren't going away. The site keeps earning the money it needs to operate while giving subscribers an ad-free experience. In this case, everybody wins. 

However, ad-lite subscriptions are not always the most cost-effective option. Rather than removing adverts entirely, they only stop you from seeing personalised ones. While others may disagree, I can't see how this would encourage me to subscribe; I'd rather pay a few extra dollars per month to remove ads completely. 

Beyond text-based websites, YouTube has tested a Premium Lite tier. Most videos are ad-free, but not all. Subscribing makes no sense for me if the videos that still carry advertisements are on topics I'm interested in. 

Using personal data 

Many websites will track your behaviour because many advertisements are tailored to your preferences. Advertisers can then use this information to recommend items and services that they believe you would be interested in.

Given that many people have become more concerned about their privacy in recent years, it's reasonable that some may wish to pay to stop their data being used. While paying sometimes achieves this, certain websites may continue to use your information even after you subscribe to an ad-lite tier. 

Websites still collect user information to gather feedback and improve their services, so your data may continue to be used in certain scenarios. The key distinction is that it will rarely be used for advertising; while this may be enough for some, others will still find it aggravating. Avoiding online tracking entirely is difficult in any case: you can still be tracked while browsing in incognito or private mode.

Use ad-free version

Many websites with ad-lite tiers also provide fully ad-free versions. When you subscribe to these, you see no advertisements at all, personalised or otherwise. You also frequently get access to exclusive and/or unlimited content, letting you fully support your preferred publications. Rather than focusing on the price alone, weigh how much value you'll gain from subscribing to an ad-free tier; it's usually better value than ad-lite. 

Getting an ad-lite membership gives you the worst of both worlds. You still see adverts, just less personalised ones, and you may see ads on content you enjoy while paying for ad-free access to content you don't care about. It's better to pay for the fully ad-free version.

WhatsApp Reveals "Private Processing" Feature for Cloud Based AI Features


WhatsApp claims even it cannot see the data while it is being processed

WhatsApp has introduced ‘Private Processing,’ a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers, without exposing their chats to Meta. The system relies on encrypted cloud infrastructure and hardware-based isolation, so that neither Meta nor anyone else can see the data while it is being processed. 

About private processing

For those who opt in, Private Processing performs an anonymous verification via the user’s WhatsApp client to confirm the user’s validity. 

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

Private processing employs Trusted Execution Environments (TEEs) — safe virtual machines that use cloud infrastructure to keep AI requests hidden. 

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption
  • Restricts storage or logging of messages post-processing
  • Publishes logs and binary images for external verification and audits
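The shape of this flow can be sketched in miniature. The toy Python sketch below is purely illustrative: the `ToyTEE` class is hypothetical, and a simple XOR keystream stands in for real end-to-end encryption and remote attestation. The point it demonstrates is only the contract the bullets describe: the enclave alone can read the request, and it retains nothing after replying.

```python
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for real authenticated encryption; NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyTEE:
    """Hypothetical enclave: can read the request, stores nothing afterwards."""
    def __init__(self) -> None:
        # In the real system the session key would be agreed via an attested
        # handshake; here the client simply reads it for the demo.
        self.session_key = secrets.token_bytes(32)

    def handle(self, sealed_request: bytes) -> bytes:
        request = xor_stream(self.session_key, sealed_request)
        reply = request.upper()  # stand-in for the actual AI task
        # No copy of `request` or `reply` is logged or kept after returning.
        return xor_stream(self.session_key, reply)

tee = ToyTEE()
sealed = xor_stream(tee.session_key, b"summarise this thread")
reply = xor_stream(tee.session_key, tee.handle(sealed))
```

Everything outside the enclave only ever handles sealed bytes; auditing the published logs and binary images is what lets outsiders check that the enclave really behaves this way.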

WhatsApp builds AI amid wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI in messaging. WhatsApp now joins companies like Apple that introduced confidential AI computing models in the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.

It is similar to Apple’s Private Cloud Compute in its emphasis on public transparency and stateless processing. Currently, however, WhatsApp uses it only for select features. Apple, on the other hand, has declared plans to implement this model across all its AI tools, whereas WhatsApp has made no such commitment yet. 

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”
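The OHTTP split in that quote can be illustrated with a toy sketch. This is not WhatsApp's implementation: the `Relay` and `Gateway` classes and the XOR "encryption" below are hypothetical stand-ins, showing only the division of knowledge, where the party that sees your identity never sees your request, and vice versa.

```python
class Gateway:
    """Hypothetical endpoint: can decrypt requests, never learns who sent them."""
    def __init__(self, key: int) -> None:
        self.key = key
        self.seen_ips: list[str] = []  # stays empty: no identity reaches the gateway

    def handle(self, sealed_request: bytes) -> bytes:
        request = bytes(b ^ self.key for b in sealed_request)  # toy decryption
        return bytes(b ^ self.key for b in request.upper())    # toy sealed reply

class Relay:
    """Hypothetical intermediary: sees the client's identity, only opaque bytes."""
    def __init__(self, gateway: Gateway) -> None:
        self.gateway = gateway

    def forward(self, client_ip: str, sealed_request: bytes) -> bytes:
        # The relay knows client_ip but cannot read sealed_request.
        return self.gateway.handle(sealed_request)

gateway = Gateway(key=0x5A)
relay = Relay(gateway)
sealed = bytes(b ^ 0x5A for b in b"ai request")
reply = bytes(b ^ 0x5A for b in relay.forward("203.0.113.7", sealed))
```

Because the relay and the gateway are run by different parties, neither alone can link a user's identity to the content of an AI request; the anonymous credentials prove the sender is a legitimate WhatsApp user without naming them.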

ProtectEU and VPN Privacy: What the EU Encryption Plan Means for Online Security

 

Texting through SMS is pretty much a thing of the past. Most people today rely on apps like WhatsApp and Signal to share messages, make encrypted calls, or send photos—all under the assumption that our conversations are private. But that privacy could soon be at risk in the EU.

On April 1, 2025, the European Commission introduced a new plan called ProtectEU. Its goal is to create a roadmap for “lawful and effective access to data for law enforcement,” particularly targeting encrypted platforms. While messaging apps are the immediate focus, VPN services might be next. VPNs rely on end-to-end encryption and strict no-log policies to keep users anonymous. However, if ProtectEU leads to mandatory encryption backdoors or expanded data retention rules, that could force VPN providers to change how they operate—or leave the EU altogether. 

Proton VPN’s Head of Public Policy, Jurgita Miseviciute, warns that weakening encryption won’t solve security issues. Instead, she believes it would put users at greater risk, allowing bad actors to exploit the same access points created for law enforcement. Proton is monitoring the plan closely, hoping the EU will consider solutions that protect encryption. Surfshark takes a more optimistic view. Legal Head Gytis Malinauskas says the strategy still lacks concrete policy direction and sees the emphasis on cybersecurity as a potential boost for privacy tools like VPNs. 

Mullvad VPN isn’t convinced. Having fought against earlier EU proposals to scan private chats, Mullvad criticized ProtectEU as a rebranded version of old policies, expressing doubt it will gain wide support. One key concern is data retention. If the EU decides to require VPNs to log user activity, it could fundamentally conflict with their privacy-first design. Denis Vyazovoy of AdGuard VPN notes that such laws could make no-log VPNs unfeasible, prompting providers to exit the EU market—much like what happened in India in 2022. NordVPN adds that the more data retained, the more risk users face from breaches or misuse. 

Even though VPNs aren’t explicitly targeted yet, an EU report has listed them as a challenge to investigations—raising concerns about future regulations. Still, Surfshark sees the current debate as a chance to highlight the legitimate role VPNs play in protecting everyday users. While the future remains uncertain, one thing is clear: the tension between privacy and security is only heating up.

Best Encrypted Messaging Apps: Signal vs Telegram vs WhatsApp Privacy Guide

 

Encrypted messaging apps have become essential tools in the age of cyber threats and surveillance. With rising concerns over data privacy, especially after recent high-profile incidents, users are turning to platforms that offer more secure communication. Among the top contenders are Signal, Telegram, and WhatsApp—each with its own approach to privacy, encryption, and data handling. 

Signal is widely regarded as the gold standard when it comes to messaging privacy. Backed by a nonprofit foundation and funded through grants and donations, Signal doesn’t rely on user data for profit. It collects minimal information—just your phone number—and offers strong on-device privacy controls, like disappearing messages and call relays to mask IP addresses. Being open-source, Signal allows independent audits of its code, ensuring transparency. Even when subpoenaed, the app could only provide limited data like account creation date and last connection, making it a favorite among journalists, whistleblowers, and privacy advocates.  

Telegram offers a broader range of features but falls short on privacy. While it supports end-to-end encryption, this is limited only to its “secret chats,” and not enabled by default in regular messages or public channels. Telegram also stores metadata, such as IP addresses and contact info, and recently updated its privacy policy to allow data sharing with authorities under legal requests. Despite this, it remains popular for public content sharing and large group chats, thanks to its forum-like structure and optional paid features. 

WhatsApp, with over 2 billion users, is the most widely used encrypted messaging app. It employs the same encryption protocol as Signal, ensuring end-to-end protection for chats and calls. However, as a Meta-owned platform, it collects significant user data—including device information, usage logs, and location data. Even people not using WhatsApp can have their data collected via synced contacts. While messages remain encrypted, the amount of metadata stored makes it less privacy-friendly compared to Signal. 

All three apps offer some level of encrypted messaging, but Signal stands out for its minimal data collection, open-source transparency, and commitment to privacy. Telegram provides a flexible chat experience with weaker privacy controls, while WhatsApp delivers strong encryption within a data-heavy ecosystem. Choosing the best encrypted messaging app depends on what you prioritize more: security, features, or convenience.

Apple and Google App Stores Host VPN Apps Linked to China, Face Outrage


Google (GOOGL) and Apple (AAPL) are under harsh scrutiny after a recent report disclosed that their app stores host VPN applications associated with Qihoo 360, a Chinese cybersecurity firm the U.S. government blacklisted in 2020 over alleged military ties. The Financial Times reports that five VPNs still available to U.S. users, such as VPN Proxy Master and Turbo VPN, are linked to Qihoo. 

Illusion of privacy: VPNs collecting data 

In 2025 alone, three of the VPN apps have passed a million downloads each on Google Play and Apple’s App Store, suggesting these aren’t small-time apps, Sensor Tower reports. They are advertised as “private browsing” tools, but the VPNs give their operators complete visibility into users’ online activity. This is alarming because China’s national security laws require companies to hand over user data if the government demands it. 

Concerns around ownership structures

The intricate web of ownership raises important questions: the apps are run by Singapore-based Innovative Connecting, owned by Lemon Seed, a Cayman Islands firm. Qihoo acquired Lemon Seed for $69.9 million in 2020 and claimed to sell the business months later, but the FT reports that the China-based team making the applications remained under Qihoo’s umbrella for years. According to the FT, a developer said, “You could say that we’re part of them, and you could say we’re not. It’s complicated.”

Amid outrage, Google and Apple respond 

Google said it strives to follow sanctions and removes violators when found. Apple removed two apps, Snap VPN and Thunder VPN, after the FT contacted the company, and says it enforces strict rules on VPN data-sharing.

Privacy scare can damage stock valuations

What Google and Apple face is more than public outrage. Investors prioritise data privacy, and regulatory pressure has increased, particularly amid growing concerns around U.S. tech firms’ links to China. If the U.S. government gets involved, the result could be stricter rules, fines, and further app removals. If that happens, shareholders won’t be happy. 

According to FT, “Innovative Connecting said the content of the article was not accurate and declined to comment further. Guangzhou Lianchuang declined to comment. Qihoo and Chen Ningyi did not respond to requests for comment.”

Orion Brings Fully Homomorphic Encryption to Deep Learning for AI Privacy

 

As data privacy becomes an increasing concern, a new artificial intelligence (AI) encryption breakthrough could transform how sensitive information is handled. Researchers Austin Ebel, Karthik Garimella, and Assistant Professor Brandon Reagen have developed Orion, a framework that integrates fully homomorphic encryption (FHE) into deep learning. 

This advancement allows AI systems to analyze encrypted data without decrypting it, ensuring privacy throughout the process. FHE has long been considered a major breakthrough in cryptography because it enables computations on encrypted information while keeping it secure. However, applying this method to deep learning has been challenging due to the heavy computational requirements and technical constraints. Orion addresses these challenges by automating the conversion of deep learning models into FHE-compatible formats. 
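To make "computations on encrypted information" concrete, here is a toy of the much simpler Paillier scheme, which is only additively homomorphic; the FHE schemes Orion builds on support far richer computation. This is a sketch for intuition, not Orion's method, and the tiny primes below offer no security whatsoever.

```python
import math
import secrets

# Toy Paillier cryptosystem: additively homomorphic public-key encryption.
# Tiny primes for illustration only; real deployments use ~2048-bit moduli.
p, q = 257, 263
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # valid because we fix the generator g = n + 1

def encrypt(m: int) -> int:
    """Enc(m) = (n+1)^m * r^n mod n^2, for random r coprime to n."""
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Multiplying ciphertexts adds the underlying plaintexts:
c_sum = (encrypt(12) * encrypt(30)) % n2
```

Decrypting `c_sum` recovers 42 even though whoever multiplied the ciphertexts never saw 12 or 30. Fully homomorphic schemes extend this to both addition and multiplication on encrypted values, which is what lets a framework like Orion run a neural network on data it cannot read.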

The researchers’ study, recently published on arXiv and set to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, highlights Orion’s ability to make privacy-focused AI more practical. One of the biggest concerns in AI today is that machine learning models require direct access to user data, raising serious privacy risks. Orion eliminates this issue by allowing AI to function without exposing sensitive information. The framework is built to work with PyTorch, a widely used machine learning library, making it easier for developers to integrate FHE into existing models. 

Orion also introduces optimization techniques that reduce computational burdens, making privacy-preserving AI more efficient and scalable. Orion has demonstrated notable performance improvements, achieving speeds 2.38 times faster than previous FHE deep learning methods. The researchers successfully implemented high-resolution object detection using the YOLO-v1 model, which contains 139 million parameters—a scale previously considered impractical for FHE. This progress suggests Orion could enable encrypted AI applications in sectors like healthcare, finance, and cybersecurity, where protecting user data is essential. 

A key advantage of Orion is its accessibility. Traditional FHE implementations require specialized knowledge, making them difficult to adopt. Orion simplifies the process, allowing more developers to use the technology without extensive training. By open-sourcing the framework, the research team hopes to encourage further innovation and adoption. As AI continues to expand into everyday life, advancements like Orion could help ensure that technological progress does not come at the cost of privacy and security.