Google’s Gmail is now offering two new upgrades, but here’s the catch: they don’t work well together. That leaves Gmail’s billions of users being asked to pick a side, better privacy or smarter features, and the choice could affect how their emails are handled in the future.
Let’s break it down. The first upgrade focuses on stronger protection of your emails and works like end-to-end encryption: messages are scrambled on your device before they ever reach Google’s servers, so even Google can’t read them. The second brings in artificial intelligence tools to improve how you search and use Gmail, promising quicker, more helpful results.
But there’s a problem. If your emails are fully protected, Gmail’s AI tools can’t read them to include in its search results. So, if you choose privacy, you might lose out on the benefits of smarter searches. On the other hand, if you want AI help, you’ll need to let Google access more of your email content.
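To make the conflict concrete, here is a minimal sketch in Python (using the open-source cryptography package; Gmail’s real client-side encryption is far more elaborate, so treat this as an illustration rather than Google’s implementation) of why a server cannot search what only the user can decrypt:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the key stays on the user's device
cipher = Fernet(key)

message = b"Quarterly results attached - do not forward."
token = cipher.encrypt(message)  # this is all the mail server ever stores

# Server-side code (a search indexer, an AI model) only sees opaque bytes:
print(token[:40])

# Only the key holder can recover the text:
print(cipher.decrypt(token).decode())
```

Any AI feature that wants to search, summarize, or autocomplete over that message needs the plaintext, and with end-to-end style protection the server never has it. That is the whole trade-off in a dozen lines.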
This challenge isn’t unique to Gmail. Many tech companies are trying to combine stronger security with AI-powered features, and the two don’t always work together. Apple has tried to solve it with a system that processes data securely on the device itself, but delays in rolling out its new AI tools have left that solution uncertain for now.
Some reports explain the choice like this: if you turn on AI features, Google will use your data to power smart tools. If you turn it off, you’ll have better privacy, but lose some useful options. The real issue is that opting out isn’t always easy. Some settings may remain active unless you manually turn them off, and fully securing your emails still isn’t simple.
Even when extra security is enabled, email systems have limitations. Apple’s iCloud Mail, for example, doesn’t use full end-to-end encryption, because mail has to interoperate with the rest of the world’s email servers over standard protocols. So even “private” emails may not be completely safe.
This issue goes beyond Gmail. Other platforms are facing similar challenges. WhatsApp, for example, added a privacy mode that blocks saving chats and media, but also limits AI-related features. OpenAI’s ChatGPT can now remember what you told it in past conversations, which may feel helpful but also raises questions about how your personal data is being stored.
In the end, users need to think carefully. AI tools can make email more useful, but they come with trade-offs. Email has never been a perfectly secure space, and with smarter AI, new threats like scams and data misuse may grow. That’s why it’s important to weigh both sides before making a choice.
Cyberattacks are changing. In the past, hackers would lock your files and show a big message asking for money. Now, a new type of attack is becoming more common. It’s called “quiet ransomware,” and it can steal your private information without you even knowing.
Last year, a small bakery in the United States noticed that their billing machine was charging customers a penny less. It seemed like a tiny error. But weeks later, they got a strange message. Hackers claimed they had copied the bakery’s private recipes, financial documents, and even camera footage. The criminals demanded a large payment or they would share everything online. The bakery was shocked; they had no idea their systems had been hacked.
What Is Quiet Ransomware?
This kind of attack is sneaky. Instead of locking your data, the hackers quietly watch your system. They take important information and wait. Then, they ask for money and threaten to release the stolen data if you don’t pay.
How These Attacks Happen
1. The hackers find a weak point, usually in an internet-connected device like a smart camera or printer.
2. They get inside your system and look through your files: emails, client details, company plans, etc.
3. They make secret copies of this information.
4. Later, they contact you, demanding money to keep the data private.
Why Criminals Use This Method
1. It’s harder to detect, since your system keeps working normally (a simple traffic-watching sketch after this list shows one signal defenders can look for).
2. Many companies prefer to quietly pay, instead of risking their reputation.
3. Devices like smart TVs, security cameras, or smartwatches are rarely updated or checked, making them easy to break into.
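Because the system keeps behaving normally, one of the few visible symptoms of quiet data theft is unusual outbound traffic. Here is a minimal sketch (Python with the third-party psutil package; the 50 MB-per-minute threshold is purely illustrative, not an industry standard) of watching upload volume on a machine:

```python
import time
import psutil  # third-party: pip install psutil

# Illustrative threshold: flag more than 50 MB leaving this machine in any
# 60-second window while nobody is deliberately uploading anything.
WINDOW_SECONDS = 60
THRESHOLD_BYTES = 50 * 1024 * 1024

baseline = psutil.net_io_counters().bytes_sent
while True:
    time.sleep(WINDOW_SECONDS)
    current = psutil.net_io_counters().bytes_sent
    sent_in_window = current - baseline
    baseline = current
    if sent_in_window > THRESHOLD_BYTES:
        print(f"Unusual upload volume: {sent_in_window / 1_000_000:.0f} MB in the last minute.")
```

A crude watch like this is no substitute for real monitoring tools, which track traffic per device and per destination, but it shows the kind of signal that can expose an otherwise silent theft.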
Real Incidents
One hospital had its smart air conditioning system hacked. Through it, criminals stole ten years of patient records. The hospital paid a huge amount to avoid legal trouble.
In another case, a smart fitness watch used by a company leader was hacked. This gave the attackers access to emails that contained sensitive information about the business.
How You Can Stay Safe
1. Keep smart devices on a different network than your main systems.
2. Turn off features like remote access or cloud backups if they are not needed.
3. Use security tools that limit what each device can do or connect to; a quick port-audit sketch follows this list.
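As a concrete starting point for the third item, here is a small sketch (standard-library Python; the device address is a placeholder you would replace with your own camera’s or printer’s IP) that checks whether a smart device is exposing common remote-access ports:

```python
import socket

DEVICE_IP = "192.168.1.50"  # placeholder: the smart device you want to audit

# Ports commonly left open on IoT gear: FTP, telnet, web admin panels,
# RTSP camera streams, and alternate HTTP.
SUSPECT_PORTS = {21: "ftp", 23: "telnet", 80: "http admin",
                 443: "https admin", 554: "rtsp stream", 8080: "alt http"}

for port, label in SUSPECT_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)  # don't hang on closed or filtered ports
        is_open = sock.connect_ex((DEVICE_IP, port)) == 0
    if is_open:
        print(f"Port {port} ({label}) is open - consider disabling this service.")
```

An open telnet or camera-stream port you never use is exactly the kind of quiet entry point the attacks above rely on.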
Today, hackers don’t always make noise. Sometimes they hide, watch, and strike later. Anyone using smart devices should be careful. A simple gadget like a smart light or thermostat could be the reason your private data gets stolen. Staying alert and securing all devices is more important than ever.
In 2025 alone, three VPN apps with links to Chinese ownership have topped a million downloads on Google Play and Apple’s App Store, suggesting these aren’t small-time apps, according to Sensor Tower. They are advertised as “private browsing” tools, but in practice they give the companies behind them a complete view of users’ online activity. That is alarming because China’s national security laws require companies to hand over user data if the government demands it.
The intricate web of ownership structures raises important questions. The apps are run by Singapore-based Innovative Connecting, which is owned by Lemon Seed, a Cayman Islands firm; Qihoo acquired Lemon Seed for $69.9 million in 2020. The company claimed to sell the business months later, but the FT reports that the China-based team building the applications remained under Qihoo’s umbrella for years. As one developer told the FT: “You could say that we’re part of them, and you could say we’re not. It’s complicated.”
Google said it strives to follow sanctions and removes violators when it finds them. Apple, which says it enforces strict rules on VPN data-sharing, removed two of the apps, Snap VPN and Thunder VPN, after the FT contacted the company.
What Google and Apple face is more than public outrage. Investors prioritise data privacy, and regulatory pressure has grown, particularly around U.S. tech firms’ links to China. If the U.S. government gets involved, the result could be stricter rules, fines, and further app removals, and shareholders won’t be happy about any of it.
According to FT, “Innovative Connecting said the content of the article was not accurate and declined to comment further. Guangzhou Lianchuang declined to comment. Qihoo and Chen Ningyi did not respond to requests for comment.”
A popular trend is taking over social media: users are sharing cartoon-like pictures of themselves inspired by the art style of Studio Ghibli. These playful, animated portraits are typically created with tools powered by artificial intelligence, such as OpenAI’s GPT-4o in ChatGPT. From Instagram to Facebook, users are posting the images enthusiastically, and high-profile entrepreneurs and celebrities have joined the global trend, Sam Altman and Elon Musk among them.
But behind the charm of these AI filters lies a serious concern: your face is being collected and stored, often without your full understanding or consent.
What’s Really Happening When You Upload Your Face?
Each time someone uploads a photo or gives camera access to an app, they may be unknowingly allowing tech companies to capture their facial features. These features become part of a digital profile that can be stored, analyzed, and even sold. Unlike a password that you can change, your facial data is permanent. Once it’s out there, it’s out for good.
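To make “digital profile” concrete, here is a minimal sketch using the open-source face_recognition library (the file name is a placeholder; commercial systems use larger proprietary models, but the principle is the same). A single photo is enough to derive a compact, reusable faceprint:

```python
import face_recognition  # third-party: pip install face_recognition

# Placeholder file name: any clear selfie works.
image = face_recognition.load_image_file("selfie.jpg")

# One call turns the photo into a 128-number template, a "faceprint".
encodings = face_recognition.face_encodings(image)

if encodings:
    faceprint = encodings[0]
    print(len(faceprint))  # 128 numbers: tiny, easy to store and compare at scale
    # Any later photo can be matched against the stored template, e.g.:
    # face_recognition.compare_faces([faceprint], encoding_from_new_photo)
```

Unlike a password hash, that template describes your actual face; if it leaks, there is nothing to reset.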
Many people don’t realize how often their face is scanned, whether it’s to unlock their phone, tag friends in photos, or try out AI tools that turn selfies into artwork. Even images of children and family members are being uploaded, putting their privacy at risk too.
Real-World Cases Show the Dangers
In one well-known case, a company named Clearview AI was accused of collecting billions of images from social platforms and other websites without asking permission. These were then used to create a massive database for law enforcement and private use.
In another incident, an Australian tech company called Outabox suffered a breach in May 2024. Over a million people had their facial scans and identity documents leaked. The stolen data was used for fraud, impersonation, and other crimes.
Retail stores using facial recognition to prevent theft have also become targets of cyberattacks. Once stolen, this kind of data is often sold on hidden parts of the internet, where it can be used to create fake identities or manipulate videos.
The Market for Facial Recognition Is Booming
Experts say the facial recognition industry will be worth over $14 billion by 2031. As demand grows, concerns about how companies use our faces for training AI tools without transparency are also increasing. Some websites can even track down a person’s online profile using just a picture.
How to Protect Yourself
To keep your face and personal data safe, it’s best to avoid viral image trends that ask you to upload clear photos. Turn off unnecessary camera permissions, don’t share high-resolution selfies, and choose passwords or PINs over face unlock for your devices.
These simple steps can help you avoid giving away something as personal as your identity. Before sharing an AI-edited selfie, take a moment to think: are a few likes worth risking your privacy? Better yet, respect the artists who spend years perfecting their craft, and consider commissioning a portrait if you’re that enthusiastic about the style.