Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Data. Show all posts

Indian Government Proposes Compulsory Location Tracking in Smartphones, Faces Backlash


Government faces backlash over location-tracking proposal

The Indian government is pushing a telecom industry proposal that will compel smartphone companies to allow satellite location tracking that will be activated 24x7 for surveillance. 

Tech giants Samsung, Google, and Apple have opposed this move due to privacy concerns. Privacy debates have stirred in India after the government was forced to repeal an order that mandated smartphone companies to pre-install a state run cyber safety application on all devices. Activists and opposition raised concerns about possible spying. 

About the proposal 

Recently, the government had been concerned that agencies didn't get accurate locations when legal requests were sent to telecom companies during investigations. Currently, the firm only uses cellular tower data that provides estimated area location, this can be sometimes inaccurate.

The Cellular Operators Association of India (COAI) representing Bharti Airtel and Reliance Jio suggested accurate user locations be provided if the government mandates smartphone firms to turn on A-GPS technology which uses cellular data and satellite signals.

Strong opposition from tech giants 

If this is implemented, location services will be activated in smartphones with no disable option. Samsung, Google, and Apple strongly oppose this proposal. A proposal to track user location is not present anywhere else in the world, according to lobbying group India Cellular & Electronics Association (ICEA), representing Google and Apple. 

Reuters reached out to the India's IT and home ministries for clarity on the telecom industry's proposal but have received no replies. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device." 

According to technology experts, utilizing A-GPS technology, which is normally only activated when specific apps are operating or emergency calls are being made, might give authorities location data accurate enough to follow a person to within a meter.  

Telecom vs government 

Globally, governments are constantly looking for new ways to improve in tracking the movements or data of mobile users. All Russian mobile phones are mandated to have a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world. 

According to Counterpoint Research, more than 95% of these gadgets are running Google's Android operating system, while the remaining phones are running Apple's iOS. 

Apple and Google cautioned that their user base will include members of the armed forces, judges, business executives, and journalists, and that the proposed location tracking would jeopardize their security because they store sensitive data.

According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."



700+ Self-hosted Gits Impacted in a Wild Zero-day Exploit


Hackers actively exploit zero-day bug

Threat actors are abusing a zero-day bug in Gogs- a famous self-hosted Git service. The open source project hasn't fixed it yet.

About the attack 

Over 700 incidents have been impacted in these attacks. Wiz researchers described the bug as "accidental" and said the attack happened in July when they were analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances. They “responsibly disclosed this vulnerability to the maintainers."

The team informed Gogs' maintainers about the bug, who are now working on the fix. 

The flaw is known as CVE-2025-8110. It is primarily a bypass of an earlier patched flaw (CVE-2024-55947) that lets authorized users overwrite external repository files. This leads to remote code execution (RCE). 

About Gogs

Gogs is written in Go, it lets users host Git repositories on their cloud infrastructure or servers. It doesn't use GitHub or other third parties. 

Git and Gogs allow symbolic links that work as shortcuts to another file. They can also point to objects outside the repository. The Gogs API also allows file configuration outside the regular Git protocol. 

Patch update 

The previous patch didn't address such symbolic links exploit and this lets threat actors to leverage the flaw and remotely deploy malicious codes. 

While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.

Other incidents 

Last year, Mandiant found Chinese state-sponsored hackers abusing a critical flaw in F5 through Supershell, and selling the access to impacted UK government agencies, US defense organizations, and others.

Researchers still don't know what threat actors are doing with access to compromised incidents. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.

How to stay safe?

Wiz has advised users to immediately disable open-registration (if not needed) and control internet exposure by shielding self-hosted Git services via VPN. Users should be careful of new repositories with unexpected usage of the PutContents API or random 8-character names. 

For more details, readers can see the full list of indicators published by the researchers.



End to End-to-end Encryption? Google Update Allows Firms to Read Employee Texts


Your organization can now read your texts

Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Google did the same. Say goodbye to end-to-end encryption. With this new RCS and SMS Android update, your RCS and SMS texts are no longer private. 

According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”

Only for organizational devices 

This is only applicable to work-managed devices and doesn't impact personal devices. In regulated industries, it will only add RCS archiving to existing SMS archiving. In an organization, however, texting is different than emailing. In the former, employees sometimes share about their non-work life. End-to-end encryptions keep these conversations safe, but this will no longer be the case.

The end-to-end question 

There is alot of misunderstanding around end-to-end encryption. It protects messages when they are being sent, but once they are on your device, they are decrypted and no longer safe. 

According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”

What will change?

With this update, getting a phone at work is no longer as good as it seems. Employees have always been insecure about the risks in over-sharing on email, as it is easy to spy. But not texts. 

The update will make things different. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”

Promoting organizational surveillance 

Because of organizational surveillance, employees at times turn to shadow IT systems such as Whatsapp and Signal to communicate with colleagues. The new Google update will only make things worse. 

“Earlier,” Google said, ““employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging."

Beer Firm Asahi Not Entertaining Threat Actors After Cyberattack


Asahi denies ransom payment 

Japanese beer giant Asahi said that it didn't receive any particular ransom demand from threat actors responsible for an advanced and sophisticated cyberattack that could have exposed the data of more than two million people. 

About the attack

CEO Atsushi Katsuki in a press conference said that the company had not been in touch with the threat actors. But Asahi has delayed the release of financial results. Even if the company received a ransom demand, it would not have paid, Katsuki said. Asahi Super Dry is one of Japan's most popular beers. Asahi suffered a cyberattack on 29th September. However, the company clarified on October 3 that it was hit by a ransomware attack.

Attack tactic 

In such incidents, threat actors typically use malicious software to encrypt the target's systems and then ask ransom for providing encryption keys to run the systems again.

Asahi said threat actors could have hacked or stolen identity data like phone numbers and names of around two million people- employees, customers and families.

Qilin gang believed to be responsible 

The firm didn't disclose details of the attacker at the conference. Later, it told AFP via mail that experts hinted towards a high chance of attack by hacking group Qilin. The gang issued a statement that the Japanese media understood as a claim of responsibility. Commenting on the situation, 

Katsuki said the firm thought it had taken needed measures to prevent such an incident. "But this attack was beyond our imagination. It was a sophisticated and cunning attack," Katsuki said. 

Impact on Asahi business 

Interestingly, Asahi delayed the release of third-quarter earnings and recently said that the annual financial results had also been delayed. "These and further information on the impact of the hack on overall corporate performance will be disclosed as soon as possible once the systems have been restored and the relevant data confirmed," the firm said.

The product supply hasn't been affected. Shipments will resume in stages while systems recover. "We apologise for the continued inconvenience and appreciate your understanding," Asahi said.

Critical Vulnerabilities Found in React Server Components and Next.js


Open in the wild flaw

The US Cybersecurity and Infrastructure Security Agency (CISA) added a critical security flaw affecting React Server Components (RSC) to its Known Exploited Vulnerabilities (KEV) catalog after exploitation in the wild.

The flaw CVE-2025-55182 (CVSS score: 10.0) or React2Shell hints towards a remote code execution (RCE) that can be triggered by an illicit threat actor without needing any setup. 

Remote code execution 

According to the CISA advisory, "Meta React Server Components contains a remote coThe incident surfaced when Amazon said it found attack attempts from infrastructure related to Chinese hacking groupsde execution vulnerability that could allow unauthenticated remote code execution by exploiting a flaw in how React decodes payloads sent to React Server Function endpoints."

The problem comes from unsafe deserialization in the library's Flight protocol, which React uses to communicate between a client and server. It results in a case where an unauthorised, remote hacker can deploy arbitrary commands on the server by sending specially tailored HTTP requests. The conversion of text into objects is considered a dangerous class of software vulnerability. 

About the flaw

 "The React2Shell vulnerability resides in the react-server package, specifically in how it parses object references during deserialization," said Martin Zugec, technical solutions director at Bitdefender.

The incident surfaced when Amazon said it found attack attempts from infrastructure related to Chinese hacking groups such as Jackpot Panda and Earth Lamia. "Within hours of the public disclosure of CVE-2025-55182 (React2Shell) on December 3, 2025, Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda," AWS said.

Attack tactic 

Few attacks deployed cryptocurrency miners and ran "cheap math" PowerShell commands for successful exploitation. After that, it dropped in-memory downloaders capable of taking out extra payload from a remote server.

According to Censys, an attack surface management platform, 2.15 million cases of internet-facing services may be affected by this flaw. This includes leaked web services via React Server Components and leaked cases of frameworks like RedwoodSDK, React Router, Waku, and Next.js.

According to data shared by attack surface management platform Censys, there are about 2.15 million instances of internet-facing services that may be affected by this vulnerability. This comprises exposed web services using React Server Components and exposed instances of frameworks such as Next.js, Waku, React Router, and RedwoodSDK.


Scammers Used Fake WhatsApp Profiles of District Collectors in Kerala


Scammers target government officials 

In a likely phishing attempt, over four employees of Kasaragod and Wayanad Collectorates received WhatsApp texts from accounts imitating their district Collectors and asking for urgent money transfers. After that, the numbers have been sent to the cyber police, according to the Collectorate officials. 

Vietnam scammers behind the operation 

The texts came from Vietnam based numbers but showed the profile pictures of concerned collectors, Inbasekar K in Kasaragod and D R Meghasree. 

In one incident, the scammers also shared a Google Pay number, but the target didn't proceed. According to the official, "the employees who received the messages were saved simply because they recognised the Collector’s tone and style of communication." 

Two employees from Wayanad received texts, all from different numbers from Vietnam. In the Kasaragod incident, Collector Inbasekar said a lot of employees received the phishing texts on WhatsApp. Two employees reported the incident. No employee lost the money. 

Scammers used typical scripts

The scam used a similar script in the two districts. The first text read: Hello, how are you? Where are you currently? In the Wayanad incident, the first massage was sent around 4 pm, and in Kasaragod, around 5:30 pm. When the employee replied, a follow up text was sent: Very good. Please do something urgently. This shows that the scam followed the typical pitches used by scammers. 

The numbers have been reported to the cyber police. According to Wayanad officials, "Once the messages were identified as fake, screenshots were immediately circulated across all internal WhatsApp groups." Cyber Unit has blocked both Vietnam-linked and Google Pay numbers.

What needs to be done?

Kasaragod Collector cautioned the public and staff to be careful when getting texts asking for money transfers. Coincidentally, in both the incidents, the texts were sent to staff employed in the Special Intensive Revision of electoral rolls. In this pursuit, the scammers revealed the pressures under which booth-level employees are working.

According to cyber security experts, the fake identity scams are increasingly targeting top government officials. Scammers are exploiting hierarchical structures to trick officials into acting promptly. “Police have urged government employees and the public to avoid responding to unsolicited WhatsApp messages requesting money, verify communication through official phone numbers or email, and report suspicious messages immediately to cybercrime authorities,” the New Indian Express reported.

AI Models Trained on Incomplete Data Can't Protect Against Threats


In cybersecurity, AI is being called the future of threat finder. However, AI has its hands tied, they are only as good as their data pipeline. But this principle is not stopping at academic machine learning, as it is also applicable for cybersecurity.

AI-powered threat hunting will only be successful if the data infrastructure is strong too.

Threat hunting powered by AI, automation, or human investigation will only ever be as effective as the data infrastructure it stands on. Sometimes, security teams build AI over leaked data or without proper data care. This can create issues later. It can affect both AI and humans. Even sophisticated algorithms can't handle inconsistent or incomplete data. AI that is trained on poor data will also lead to poor results. 

The importance of unified data 

A correlated data controls the operation. It reduces noise and helps in noticing patterns that manual systems can't.

Correlating and pre-transforming the data makes it easy for LLMs and other AI tools. It also allows connected components to surface naturally. 

A same person may show up under entirely distinct names as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. You only have a small portion of the truth when you look at any one of those signs. 

You have behavioral clarity when you consider them collectively. While downloading dozens of items from Google Workspace may look strange on its own, it becomes obviously malevolent if the same user also clones dozens of repositories to a personal laptop and launches a public S3 bucket minutes later.

Finding threat via correlation 

Correlations that previously took hours or were impossible become instant when data from logs, configurations, code repositories, and identification systems are all housed in one location. 

For instance, lateral movement that uses short-lived credentials that have been stolen frequently passes across multiple systems before being discovered. A hacked developer laptop might take on several IAM roles, launch new instances, and access internal databases. Endpoint logs show the local compromise, but the extent of the intrusion cannot be demonstrated without IAM and network data.


How Spyware Steals Your Data Without You Knowing About It


You might not be aware that your smartphone has spyware, which poses a risk to your privacy and personal security. However, what exactly is spyware? 

This type of malware, often presented as a trustworthy mobile application, has the potential to steal your data, track your whereabouts, record conversations, monitor your social media activity, take screenshots of your activities, and more. Phishing, a phony mobile application, or a once-reliable software that was upgraded over the air to become an information thief are some of the ways it could end up on your phone.

Types of malware

Legitimate apps are frequently packaged with nuisanceware. It modifies your homepage or search engine settings, interrupts your web browsing with pop-ups, and may collect your browsing information to sell to networks and advertising agencies.

Nuisanceware

Nuisanceware is typically not harmful or a threat to your fundamental security, despite being seen as malvertising. Rather, many malware packages focus on generating revenue by persuading users to view or click on advertisements.

Generic mobile spyware

Additionally, there is generic mobile spyware. These types of malware collect information from the operating system and clipboard in addition to potentially valuable items like account credentials or bitcoin wallet data. Spray-and-pray phishing attempts may employ spyware, which isn't always targeted.

Stalkerware

Compared to simple spyware, advanced spyware is sometimes also referred to as stalkerware. This spyware, which is unethical and frequently harmful, can occasionally be found on desktop computers but is becoming more frequently installed on phones.

The infamous Pegasus

Lastly, there is commercial spyware of governmental quality. One of the most popular variations is Pegasus, which is sold to governments as a weapon for law enforcement and counterterrorism. 

Pegasus was discovered on smartphones owned by lawyers, journalists, activists, and political dissidents. Commercial-grade malware is unlikely to affect you unless you belong to a group that governments with ethical dilemmas are particularly interested in. This is because commercial-grade spyware is expensive and requires careful victim selection and targeting.

How to know if spyware is on your phone?

There are signs that you may be the target of a spyware or stalkerware operator.

Receiving strange or unexpected emails or messages on social media could be a sign of a spyware infection attempt. You should remove these without downloading any files or clicking any links.

User Privacy:Is WhatsApp Not Safe to Use?


WhatsApp allegedly collects data

The mega-messenger from Meta is allegedly collecting user data to generate ad money, according to recent attacks on WhatsApp. WhatsApp strongly opposes these fresh accusations, but it didn't help that a message of its own appeared to imply the same.  

The allegations 

There are two prominent origins of the recent attacks. Few experts are as well-known as Elon Musk, particularly when it occurs on X, the platform he owns. Musk asserted on the Joe Rogan Experience that "WhatsApp knows enough about what you're texting to know what ads to show you." "That is a serious security flaw."

These so-called "hooks for advertising" are typically thought to rely on metadata, which includes information on who messages whom, when, and how frequently, as well as other information from other sources that is included in a user's profile.  

End-to-end encryption 

The message content itself is shielded by end-to-end encryption, which is the default setting for all 3 billion WhatsApp users. Signal's open-source encryption protocol, which the Meta platform adopted and modified for its own use, is the foundation of WhatsApp's security. So, in light of these new attacks, do you suddenly need to stop using WhatsApp?

In reality, WhatsApp's content is completely encrypted. There has never been any proof that Meta, WhatsApp, or anybody else can read the content itself. However, the platform you are utilizing is controlled by Meta, and it is aware of your identity. It does gather information on how you use the platform.  

How user data is used 

Additionally, it shares information with Meta so that it can "show relevant offers/ads." Signal has a small portion of WhatsApp's user base, but it does not gather metadata in the same manner. Think about using Signal instead for sensitive content. Steer clear of Telegram since it is not end-to-end encrypted and RCS because it is not yet cross-platform encrypted.

Remember that end-to-end encryption only safeguards your data while it is in transit. It has no effect on the security of your content on the device. I can read all of your messages, whether or not they are end-to-end encrypted, if I have control over your iPhone or Android.

TP-Link Routers May Get Banned in US Due to Alleged Links With China


TP-Link routers may soon shut down in the US. There's a chance of potential ban as various federal agencies have backed the proposal. 

Alleged links with China

The news first came in December last year. According to the WSJ, officials at the Departments of Justice, Commerce, and Defense had launched investigations into the company due to national security threats from China. 

Currently, the proposal has gotten interagency approval. According to the Washington Post, "Commerce officials concluded TP-Link Systems products pose a risk because the US-based company's products handle sensitive American data and because the officials believe it remains subject to jurisdiction or influence by the Chinese government." 

But TP-Link's connections to the Chinese government are not confirmed. The company has denied of any ties with being a Chinese company. 

About TP-Link routers 

The company was founded in China in 1996. After the October 2024 investigation, the company split into two: TP-Link Systems and TP-Link Technologies. "TP-Link's unusual degree of vulnerabilities and required compliance with [Chinese] law are in and of themselves disconcerting. When combined with the [Chinese] government's common use of [home office] routers like TP-Link to perpetrate extensive cyberattacks in the United States, it becomes significantly alarming" the officials wrote in October 2024. 

The company dominated the US router market since the COVID pandemic. It rose from 20% of total router sales to 65% between 2019 and 2025. 

Why the investigation?

The US DoJ is investigating if TP-Link was involved in predatory pricing by artificially lowering its prices to kill the competition. 

The potential ban is due to an interagency review and is being handled by the Department of Commerce. Experts say that the ban may be lifted in future due to Trump administration's ongoing negotiations with China. 

Hackers Exploit AI Stack in Windows to Deploy Malware


The artificial intelligence (AI) stack built into Windows can act as a channel for malware transmission, a recent study has demonstrated.

Using AI in malware

Security researcher hxr1 discovered a far more conventional method of weaponizing rampant AI in a year when ingenious and sophisticated quick injection tactics have been proliferating. He detailed a living-off-the-land attack (LotL) that utilizes trusted files from the Open Neural Network Exchange (ONNX) to bypass security engines in a proof-of-concept (PoC) provided exclusively to Dark Reading.

Impact on Windows

Programs for cybersecurity are only as successful as their designers make them. Because these are known signs of suspicious activity, they may detect excessive amounts of data exfiltrating from a network or a foreign.exe file that launches. However, if malware appears on a system in a way they are unfamiliar with, they are unlikely to be aware of it.

That's the reason AI is so difficult. New software, procedures, and systems that incorporate AI capabilities create new, invisible channels for the spread of cyberattacks.

Why AI in malware is a problem

The Windows operating system has been gradually including features since 2018 that enable apps to carry out AI inference locally without requiring a connection to a cloud service. Inbuilt AI is used by Windows Hello, Photos, and Office programs to carry out object identification, facial recognition, and productivity tasks, respectively. They accomplish this by making a call to the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.

ONNX files are automatically trusted by Windows and security software. Why wouldn't they? Although malware can be found in EXEs, PDFs, and other formats, no threat actors in the wild have yet to show that they plan to or are capable of using neural networks as weapons. However, there are a lot of ways to make it feasible.

Attack tactic

Planting a malicious payload in the metadata of a neural network is a simple way to infect it. The compromise would be that this virus would remain in simple text, making it much simpler for a security tool to unintentionally detect it.

Piecemeal malware embedding among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker may utilize sophisticated steganography to hide a payload inside the neural network's own weights.

As long as you have a loader close by that can call the necessary Windows APIs to unpack it, reassemble it in memory, and run it, all three approaches will function. Additionally, both approaches are very covert. Trying to reconstruct a fragmented payload from a neural network would be like trying to reconstruct a needle from bits of it spread through a haystack.

Chinese Hackers Attack Prominent U.S Organizations


Chinese cyber-espionage groups attacked U.S organizations with links to international agencies. This has now become a problem for the U.S, as state-actors from China keep attacking.  Attackers were trying to build a steady presence inside the target network.

Series of attacks against the U.S organizations 

Earlier this year, the breach was against a famous U.S non-profit working in advocacy, that demonstrated advanced techniques and shared tools among Chinese cyber criminal gangs like APT41, Space Pirates, and Kelp.

They struck again in April with various malicious prompts checking both internal network breach and internet connectivity, particularly targeting a system at 192.0.0.88. Various tactics and protocols were used, showing both determination and technical adaptability to get particular internal resources.

Attack tactics 

Following the connectivity tests, the hackers used tools like netstat for network surveillance and made an automatic task via the Windows command-line tools.

This task ran a genuine MSBuild.exe app that processed an outbound.xml file to deploy code into csc.exe and connected it to a C2 server. 

These steps hint towards automation (through scheduled tasks) and persistence via system-level privileges increasing the complexity of the compromise and potential damage.

Espionage methods 

The techniques and toolkit show traces of various Chinese espionage groups. The hackers weaponized genuine software elements. This is called DLL sideloading by abusing vetysafe.exe (a VipreAV component signed by Sunbelt Software, Inc.) to load a malicious payload called sbamres.dll.

This tactic was earlier found in campaigns lkmkedytl Earth Longzhi and Space Pirates, the former also known as APT41 subgroup.

Coincidentally, the same tactic was found in cases connected to Kelp, showing the intrusive tool-sharing tactics within Chinese APTs.

The Risks of AI-powered Web Browsers for Your Privacy


AI and web browser

The future of browsing is AI, it watches everything you do online. Security and privacy are two different things; they may look same, but it is different for people who specialize in these two. Threats to your security can also be dangers to privacy. 

Threat for privacy and security

Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other. 

Recently, OpenAI released its ChatGPT-powered Comet Browser, and Brave Software team disclosed that AI-powered browsers can follow malicious prompts that hide in images on the web. 

AI powered browser good or bad?

We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user. 

That is the aspect of security. Experts who evaluated the Comet Browser discovered that it records everything you do while using it, including search and browser history as well as information about the URLs you visit. 

What next?

In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.

Researchers studied the ten biggest VPN attacks in recent history. Many of them were not even triggered by foreign hostile actors; some were the result of basic human faults, such as leaked credentials, third-party mistakes, or poor management.

Atlas: AI powered web browser

Atlas, an AI-powered web browser developed with ChatGPT as its core, is meant to do more than just allow users to navigate the internet. It is capable of reading, sum up, and even finish internet tasks for the user, such as arranging appointments or finding lodgings.

Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, a summary was created utilizing information from other publications such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which have partnerships or agreements with OpenAI, with the exception of Reuters.

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems

Microsoft Warns Windows 10 Users: Hackers Target Outdated Systems

Modern cyberattacks rarely target the royal jewels.  Instead, they look for flaws in the systems that control the keys, such as obsolete operating systems, aging infrastructure, and unsupported endpoints.  For technical decision makers (TDMs), these blind spots are more than just an IT inconvenience.  They pose significant hazards to data security, compliance, and enterprise control.

Dangers of outdated windows 10

With the end of support for Windows 10 approaching, many businesses are asking themselves how many of their devices, servers, or endpoints are already (or will soon be) unsupported.  More importantly, what hidden weaknesses does this introduce into compliance, auditability, and access governance?

Most IT leaders understand the urge to keep outdated systems running for a little longer, patch what they can, and get the most value out of the existing infrastructure.

Importance of system updates

However, without regular upgrades, endpoint security technologies lose their effectiveness, audit trails become more difficult to maintain, and compliance reporting becomes a game of guesswork. 

Research confirms the magnitude of the problem.  According to Microsoft's newest Digital Defense Report, more than 90% of ransomware assaults that reach the encryption stage originate on unmanaged devices that lack sufficient security controls.  

Unsupported systems frequently fall into this category, making them ideal candidates for exploitation.  Furthermore, because these vulnerabilities exist at the infrastructure level rather than in individual files, they are frequently undetectable until an incident happens.

Attack tactic

Hackers don't have to break your defense. They just need to wait for you to leave a window open. With the end of support for Windows 10 approaching, hackers are already predicting that many businesses will fall behind. 

Waiting carries a high cost. Breaches on unsupported infrastructure can result in higher cleanup costs, longer downtime, and greater reputational harm than attacks on supported systems. Because compliance frameworks evolve quicker than legacy systems, staying put risks falling behind on standards that influence contracts, customer trust, and potentially your ability to do business.

What next?

Although unsupported systems may appear to be small technical defects, they quickly escalate into enterprise-level threats. The longer they remain in play, the larger the gap they create in endpoint security, compliance, and overall data security. Addressing even one unsupported system now can drastically reduce risk and give IT management more piece of mind. 

TDMs have a clear choice: modernize proactively or leave the door open for the next assault.

AWS Outage Exposes the Fragility of Centralized Messaging Platforms




A recently recorded outage at Amazon Web Services (AWS) disrupted several major online services worldwide, including privacy-focused communication apps such as Signal. The event has sparked renewed discussion about the risks of depending on centralized systems for critical digital communication.

Signal is known globally for its strong encryption and commitment to privacy. However, its centralized structure means that all its operations rely on servers located within a single jurisdiction and primarily managed by one cloud provider. When that infrastructure fails, the app’s global availability is affected at once. This incident has demonstrated that even highly secure applications can experience disruption if they depend on a single service provider.

According to experts working on decentralized communication technology, this kind of breakdown reveals a fundamental flaw in the way most modern communication apps are built. They argue that centralization makes systems easier to control but also easier to compromise. If the central infrastructure goes offline, every user connected to it is impacted simultaneously.

Developers behind the Matrix protocol, an open-source network for decentralized communication, have long emphasized the need for more resilient systems. They explain that Matrix allows users to communicate without relying entirely on the internet or on a single server. Instead, the protocol enables anyone to host their own server or connect through smaller, distributed networks. This decentralization offers users more control over their data and ensures communication can continue even if a major provider like AWS faces an outage.

The first platform built on Matrix, Element, was launched in 2016 by a UK-based team with the aim of offering encrypted communication for both individuals and institutions. For years, Element’s primary focus was to help governments and organizations secure their communication systems. This focus allowed the project to achieve financial stability while developing sustainable, privacy-preserving technologies.

Now, with growing support and new investments, the developers behind Matrix are working toward expanding the technology for broader public use. Recent funding from European institutions has been directed toward developing peer-to-peer and mesh network communication, which could allow users to exchange messages without relying on centralized servers or continuous internet connectivity. These networks create direct device-to-device links, potentially keeping users connected during internet blackouts or technical failures.

Mesh-based communication is not a new idea. Previous applications like FireChat allowed people to send messages through Bluetooth or Wi-Fi Direct during times when the internet was restricted. The concept gained popularity during civil movements where traditional communication channels were limited. More recently, other developers have experimented with similar models, exploring ways to make decentralized communication more user-friendly and accessible.

While decentralized systems bring clear advantages in terms of resilience and independence, they also face challenges. Running individual servers or maintaining peer-to-peer networks can be complex, requiring technical knowledge that many everyday users might not have. Developers acknowledge that reaching mainstream adoption will depend on simplifying these systems so they work as seamlessly as centralized apps.

Other privacy-focused technology leaders have also noted the implications of the AWS outage. They argue that relying on infrastructure concentrated within a few major U.S. providers poses strategic and privacy risks, especially for regions like Europe that aim to maintain digital autonomy. Building independent, regionally controlled cloud and communication systems is increasingly being seen as a necessary step toward safeguarding user privacy and operational security.

The recent AWS disruption serves as a clear warning. Centralized systems, no matter how secure, remain vulnerable to large-scale failures. As the digital world continues to depend heavily on cloud-based infrastructure, developing decentralized and distributed alternatives may be key to ensuring communication remains secure, private, and resilient in the face of future outages.


Surveillance Pricing: How Technology Decides What You Pay




Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. It is a regular practice and we must comprehend the ins and outs since we are directly subjected to it.


What is surveillance pricing?

Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”, the maximum amount they are likely to pay for a product or service.

A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.


Growing concerns about fairness

In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.

Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.


How does it work?

AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location to classify shoppers by “price sensitivity.” The software then tests different price levels to see which one yields the highest profit.

The FTC’s surveillance pricing study revealed several real-world examples of this practice:

  1. Encouraging hesitant users: A betting website might detect when a visitor is about to leave and display new offers to convince them to stay.
  2. Targeting new buyers: A car dealership might identify first-time buyers and offer them different financing options or deals.
  3. Detecting urgency: A parent choosing fast delivery for baby products may be deemed less price-sensitive and offered fewer discounts.
  4. Withholding offers from loyal customers: Regular shoppers might be excluded from promotions because the system expects them to buy anyway.
  5. Monitoring engagement: If a user watches a product video for longer, the system might interpret it as a sign they are willing to pay more.


Real-world examples and evidence

Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.


Is this new?

The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.


The future of pricing

Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.

For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential to protect both privacy and pocket.


Incognito Mode Is Not Private, Use These Instead


Incognito (private mode) is a famous privacy feature in web browsers. Users may think that using Incognito mode ensures privacy while surfing the web, allowing them to browse without restrictions, and that everything disappears when the tab is closed. 

With no sign of browsing history in Incognito mode, you may believe you are safe. However, this is not entirely accurate, as Incognito has its drawbacks and doesn’t guarantee private browsing. But this doesn’t mean that the feature is useless. 

What Incognito mode does

Private browsing mode is made to keep your local browsing history secret. When a user opens an incognito window, their browser starts a different session and temporarily saves browsing in the session, such as history and cookies. Once the private session is closed, the temporary information is self-deleted and is not visible in your browsing history. 

What Incognito mode can’t do

Incognito mode helps to keep your browsing data safe from other users who use your device

A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.

Why Incognito mode doesn't guarantee privacy

1. It doesn’t hide user activity from the Internet Service Provider (ISP)

Every request you send travels via the ISP network (encrypted DNS providers are an exception). Your ISPs can track user activity on their networks, and can monitor your activity and all the domains you visit, and even your unencrypted traffic. If you are on a corporate Wi-Fi network, your network admin can see the visited websites. 

2. Incognito mode doesn’t stop websites from tracking users

When you are using Incognito, cookies are deleted, but websites can still track your online activity via device and browser fingerprinting. Sites create user profiles based on unique device characteristics such as resolution, installed extensions, and screen size.

3. Incognito mode doesn’t hide your IP address

If you are blocked from a website, using Incognito mode won’t make it accessible. It can’t change your I address.

Should you use Incognito mode?

It may give a false sense of benefits, but Incognito mode doesn’t ensure privacy. It is only helpful for shared devices.

What can you use?

There are other options to protect your online privacy, such as:

  1. Using a virtual private network (VPN)
  2. Privacy-focused browsers: Browsers such as Tor are by default designed to block trackers, ads, and fingerprinting.
  3. Using private search engines: Instead of Google and Bing, you can use private search engines such as DuckDuckGo and Startpage.

ICE Uses Fake Tower Cells to Spy on Users

Federal contract to spy

Earlier this year, the US Immigration and Customs Enforcement (ICE) paid $825,000 to a manufacturing company that makes vehicles installed with tech for law enforcement, which also included fake cellphone towers called "cell-site" simulators used to surveil phones. 

The contract was made with a Maryland-based company called TechOps Specialty Vehicles (TOSV). TOSV signed another contract with ICE for $818,000 last year during the Biden administration. 

The latest federal contract shows how few technologies are being used to support the Trump administration's crackdown on deportation. 

In September 2025, Forbes discovered an unsealed search warrant that revealed ICE used a cell-site simulator to spy on a person who was allegedly a member of a criminal gang in the US, and was asked to leave the US in 2023.  Forbes also reported on finding a contract for "cell site simulator." 

About ICE

Cell-site simulators were also called "stingrays." Over time, they are now known as International Mobile Subscriber Identity (IMSI) catchers, a unique number used to track every cellphone user in the world.

These tools can mimic a cellphone tower and can fool every device in the nearby range to connect to the device, allowing law enforcement to identify the real-world location of phone owners. Few cell-site simulators can also hack texts, internet traffic, and regular calls. 

Authorities have been using Stingray devices for more than a decade. It is controversial as authorities sometimes don't get a warrant for their use. 

According to experts, these devices trap innocent people; their use is secret as the authorities are under strict non-disclosure agreements not to disclose how these devices work. ICE has been infamous for using cell-site simulators. In 2020, a document revealed that ICE used them 466 times between 2017 and 2019. 

Paying Ransom Does Not Guarantee Data Restoration: Report


A new report claims that smaller firms continue to face dangers in the digital domain, as ransomware threats persistently target organizations. Hiscox’s Cyber Readiness Report surveyed 6,000 businesses, and over 59% report they have been hit by a cyber attack in the last year.  

Financial losses were a major factor; most organizations reported operational failures, reputation damage, and staff losses. “Outdated operating systems and applications often contain security vulnerabilities that cyber attackers can exploit. Even with robust defenses, there is always a risk of data loss or ransomware attacks,” the report said.

Problems with ransomware payments

Ransomware is the topmost problem; the survey suggests that around 27% of respondents suffered damage, and 80% agreed to pay ransom. 

Despite the payments, recovery was not confirmed as only 60% could restore their data, while hackers asked for repayments again. The reports highlight that paying the ransom to hackers doesn’t ensure data recovery and can even lead to further extortion. 

Transparency needed

There is an urgent need for transparency, as 71% respondents agreed that companies should disclose ransom payments and the money paid. Hiscox found that gangs are targeting sensitive data like executive emails, financial information, and contracts.

The report notes that criminal groups are increasingly targeting sensitive business data such as contracts, executive emails, and financial information. "Cyber criminals are now much more focused on stealing sensitive business data. Once stolen, they demand payment…pricing threats based on reputational damage,” the report said. This shift has exposed gaps in businesses’ data loss prevention measures that criminals exploit easily.  

AI threat

Respondents also said they experienced AI-related incidents, where threat actors exploited AI flaws such as deepfakes and vulnerabilities in third-party AI apps. Around 65% still perceive AI as an opportunity rather than a threat. The report highlights new risks that business leaders may not fully understand yet. 

According to the report, “Even with robust defenses, there is always a risk of data loss or ransomware attacks. Frequent, secure back-ups – stored either offline or in the cloud – ensure that businesses can recover quickly if the worst happens.”

Zero-click Exploit AI Flaws to Hack Systems


What if machines, not humans, become the centre of cyber-warfare? Imagine if your device could be hijacked without you opening any link, downloading a file, or knowing the hack happened? This is a real threat called zero-click attacks, a covert and dangerous type of cyber attack that abuses software bugs to hack systems without user interaction. 

The threat

These attacks have used spywares such as Pegasus and AI-driven EchoLeak, and shown their power to attack millions of systems, compromise critical devices, and steal sensitive information. With the surge of AI agents, the risk is high now. The AI-driven streamlining of work and risen productivity has become a lucrative target for exploitation, increasing the scale and attack tactics of breaches.

IBM technology explained how the combination of AI systems and zero-click flaws has reshaped the cybersecurity landscape. “Cybercriminals are increasingly adopting stealthy tactics and prioritizing data theft over encryption and exploiting identities at scale. A surge in phishing emails delivering infostealer malware and credential phishing is fueling this trend—and may be attributed to attackers leveraging AI to scale distribution,” said the IBM report.

A few risks of autonomous AI are highlighted, such as:

  • Threat of prompt injection 
  • Need for an AI firewall
  • Gaps in addressing the challenges due to AI-driven tech

About Zero-click attacks

These attacks do not need user interaction, unlike traditional cyberattacks that relied on social engineering campaigns or phishing attacks. Zero-click attacks exploit flaws in communication or software protocols to gain unauthorized entry into systems.  

Echoleak: An AI-based attack that modifies AI systems to hack sensitive information.

Stagefright: A flaw in Android devices that allows hackers to install malicious code via multimedia messages (MMS), hacking millions of devices.

Pegasus: A spyware that hacks devices through apps such as iMessage and WhatsApp, it conducts surveillance, can gain unauthorized access to sensitive data, and facilitate data theft as well.

How to stay safe?

According to IBM, “Despite the magnitude of these challenges, we found that most organizations still don’t have a cyber crisis plan or playbooks for scenarios that require swift responses.” To stay safe, IBM suggests “quick, decisive action to counteract the faster pace with which threat actors, increasingly aided by AI, conduct attacks, exfiltrate data, and exploit vulnerabilities.”