Cybersecurity researchers have uncovered a troubling risk tied to how popular AI chatbots answer basic questions. When asked where to log in to well-known websites, some of these tools may unintentionally direct users to the wrong places, putting their private information at risk.
Phishing is one of the oldest and most dangerous tricks in the cybercrime world. It usually involves fake websites that look almost identical to real ones. People often get an email or message that appears to be from a trusted company, like a bank or online store. These messages contain links that lead to scam pages. If you enter your username and password on one of these fake sites, the scammer gets full access to your account.
Now, a team from the cybersecurity company Netcraft has found that large language models (LLMs), the technology behind popular AI chatbots, may be helping scammers without meaning to. In their study, they tested how accurately an AI chatbot could provide login links for 50 well-known companies across industries such as finance, retail, technology, and utilities.
The results were surprising. The chatbot gave the correct web address only 66% of the time. In about 29% of cases, the links led to inactive or suspended pages. In 5% of cases, they sent users to a completely different website that had nothing to do with the original question.
So how does this help scammers? Cybercriminals can register the unclaimed or inactive domains that the AI incorrectly suggests and turn them into realistic phishing pages. If people click on them, thinking they’re going to the right site, they may unknowingly hand over sensitive information like their bank login or credit card details.
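To make the risk concrete, here is a minimal sketch, not taken from the Netcraft report, of how one might flag suspicious AI-suggested links. The domain names are hypothetical, and the heuristic is deliberately simple: a login link whose domain does not resolve at all may point to an unregistered name that a scammer could later claim.

```kotlin
import java.net.InetAddress
import java.net.URI
import java.net.UnknownHostException

// Hypothetical links a chatbot might return for "where do I log in to my bank?"
val suggestedLinks = listOf(
    "https://www.example-bank.com/login",   // assumed-correct address
    "https://examplebank-login-secure.com", // hypothetical unregistered lookalike
)

fun main() {
    for (link in suggestedLinks) {
        val host = URI(link).host ?: continue
        val status = try {
            InetAddress.getByName(host) // simple DNS lookup
            "resolves"
        } catch (e: UnknownHostException) {
            // No DNS record: the domain may be unregistered or parked,
            // exactly the kind of name a scammer could buy and weaponize.
            "does NOT resolve - treat with suspicion"
        }
        println("$host -> $status")
    }
}
```

A lookup like this only catches dormant domains; once a scammer registers the name and stands up a phishing page, it resolves like any other site, which is why verifying the address itself matters more.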
In one example observed by Netcraft, an AI-powered search tool redirected users who asked about a U.S. bank login to a fake copy of the bank’s website. The real link was shown further down the results, increasing the risk of someone clicking on the wrong one.
Experts also noted that smaller companies, such as regional banks and mid-sized fintech platforms, were more likely to be affected than global giants like Apple or Google. These smaller businesses may not have the same resources to secure their digital presence or respond quickly when problems arise.
The researchers explained that this problem doesn't mean the AI tools are malicious. However, these models generate answers based on patterns, not verified sources, and that can lead to outdated or incorrect responses.
The report serves as a strong reminder: AI is powerful, but it is not perfect. Until improvements are made, users should avoid relying on AI-generated links for sensitive tasks. When in doubt, type the website address directly into your browser or use a trusted bookmark.
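As a rough illustration of that advice, here is a minimal sketch of a bookmark-style check. The trusted host list and URLs are hypothetical; the point is that a link is accepted only when its host exactly matches an address the user saved themselves.

```kotlin
import java.net.URI

// Hypothetical trusted bookmarks maintained by the user.
val trustedHosts = setOf("www.example-bank.com", "accounts.example-shop.com")

// Accepts a link only when its host exactly matches a saved bookmark.
// Lookalikes such as "www.example-bank.com.evil.net" fail the check,
// because the full host string is compared, not just its beginning.
fun isTrustedLoginLink(link: String): Boolean {
    val host = URI(link).host ?: return false
    return host in trustedHosts
}

fun main() {
    println(isTrustedLoginLink("https://www.example-bank.com/login"))          // true
    println(isTrustedLoginLink("https://www.example-bank.com.evil.net/login")) // false
}
```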
In a critical move for mobile security, Google is preparing to roll out a new feature in Android 16 that will help protect users from fake mobile towers, also known as cell site simulators, that can be used to spy on people without their knowledge.
These deceptive towers, often referred to as stingrays or IMSI catchers, are devices that imitate real cell towers. When a smartphone connects to them, attackers can track the user’s location or intercept sensitive data like phone calls, text messages, or even the phone's unique ID numbers (such as IMEI). What makes them dangerous is that users typically have no idea their phones are connected to a fraudulent network.
Stingrays usually exploit older 2G networks, which lack strong encryption and tower authentication. Even if a person uses a modern 4G or 5G connection, their device can still switch to 2G if that signal is stronger, opening the door for such attacks.
Until now, Android users had very limited options to guard against these silent threats. The most effective method was to manually turn off 2G network support—something many people aren’t aware of or don’t know how to do.
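For developers, a minimal Android sketch can at least surface when a device is currently on a 2G connection. This is an illustration, not Google's Android 16 implementation, and it assumes the app has already been granted the READ_PHONE_STATE permission; it uses the standard TelephonyManager API.

```kotlin
import android.content.Context
import android.telephony.TelephonyManager

// Network types that indicate a 2G radio technology.
private val TWO_G_TYPES = setOf(
    TelephonyManager.NETWORK_TYPE_GPRS,
    TelephonyManager.NETWORK_TYPE_EDGE,
    TelephonyManager.NETWORK_TYPE_CDMA,
    TelephonyManager.NETWORK_TYPE_1xRTT,
    TelephonyManager.NETWORK_TYPE_GSM,
)

// Returns true if the current data connection uses a 2G technology.
// Assumes READ_PHONE_STATE has been granted at runtime.
fun isOn2G(context: Context): Boolean {
    val tm = context.getSystemService(Context.TELEPHONY_SERVICE) as TelephonyManager
    return tm.dataNetworkType in TWO_G_TYPES
}
```

Note the limitation: an app can only observe the reported network type. Detecting a tower that is actively probing for identifiers requires modem-level support, which is exactly why the Android 16 feature described below depends on new hardware.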
That’s changing with Android 16. According to public documentation on the Android Open Source Project, the operating system will introduce a “network security warning” feature. When activated, it will notify users if their phone connects to a mobile network that behaves suspiciously, such as trying to extract device identifiers or downgrade the connection to an unsecured one.
This feature will be accessible through the “Mobile Network Security” settings, where users can also manage 2G-related protections. However, there's a catch: most current Android phones, including Google's own Pixel models, don’t yet have the hardware required to support this function. As a result, the feature is not yet visible in settings, and it’s expected to debut on newer devices launching later this year.
Industry observers believe that this detection system might first appear on the upcoming Pixel 10, potentially making it one of the most security-focused smartphones to date.
While stingray technology is sometimes used by law enforcement agencies for surveillance under strict regulations, its misuse remains a serious privacy concern, especially if such tools fall into the wrong hands.
With Android 16, Google is taking a step toward giving users more control and awareness over the security of their mobile connections. As surveillance tactics become more advanced, these kinds of features are increasingly necessary to protect personal privacy.