Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Cyber Security. Show all posts

Companies Are Ditching VPNs to Escape the Hidden “Cybersecurity Tax” in 2025

 

Every business is paying what experts now call a “cybersecurity tax.” You won’t find it as a line on the balance sheet, but it’s embedded in rising insurance premiums (up 15–25% annually), hardware upgrades every few years, and per-user licensing fees that grow with each new hire. Add to that the IT teams juggling multiple VPN systems across departments — and the cost is undeniable.

Then there’s the biggest expense: the average $4.4 million cost of a data breach. Business disruption and customer recovery drive this figure higher, with reputational damage alone averaging $1.47 million. In severe cases, companies have faced damages exceeding a billion dollars.

2025’s Turning Point: Escaping the Cybersecurity Tax

A growing number of companies are breaking free from these hidden costs by replacing legacy VPNs with software-defined mesh networks. When Cloudflare’s major outage hit in June, most of the internet went dark — except for organizations already using decentralized architectures. These companies continued operating seamlessly, having eliminated the single point of failure that traditional VPNs depend on.

According to the Cybersecurity Insiders 2025 VPN Exposure Report, 48% of businesses using VPNs have already suffered breaches. In contrast, alternatives like ZeroTier are quickly gaining ground. The company ended 2024 with over 5,000 paid accounts and now supports 2.5 million connected devices across 230 countries. Its consistent double-digit quarterly revenue growth shows that enterprises are embracing change — and backing it financially.

The Competitive Edge of Going VPN-Free

Organizations shifting away from VPNs aren’t just improving security — they’re gaining a cost advantage. Traditional VPNs were designed for small, centralized teams in the 1990s. Today’s global workforce spans continents, cloud platforms, and contractors. That single-bridge network design now costs businesses in three key ways:

  1. Operational Overhead: Multiple incompatible VPNs, recurring hardware replacements, and per-user fees that scale with headcount. IT teams spend excessive time on access management instead of innovation.

  2. Insurance Premiums: Legacy VPN users face 15–25% annual insurance increases as breach risks rise. Past incidents — from Colonial Pipeline to Collins Aerospace — show just how damaging VPN vulnerabilities can be.

  3. Breach Exposure: Nearly half of VPN-dependent firms have already paid the breach price, suffering payroll halts, SLA penalties, and costly SEC disclosures.

Inside the Architecture Shift

The emerging alternative — software-defined mesh networking — works differently. Instead of channeling all traffic through one gateway, these systems create direct, encrypted peer-to-peer connections between devices.

ZeroTier’s approach illustrates this model well: each device gets a unique cryptographic ID, enabling secure, direct communication. A controller handles authentication, while data itself never passes through a centralized chokepoint.

“With Internet-connected devices outnumbering humans by a factor of three, the need for secure connectivity is skyrocketing,” says Andrew Gault, CEO of ZeroTier. “But most enterprises are paying a massive tax to legacy architectures that create more problems than they solve.”

 When Cloudflare’s systems failed, organizations using these mesh networks remained online. Each device could access only what it needed, minimizing exposure even if credentials were compromised. And when scaling up, new locations or users are added through software configuration — not hardware procurement.

Real-World Impact

Companies like Metropolis, which operates checkout-free parking systems, are rapidly scaling from thousands to hundreds of thousands of devices — without new VPN hardware. Similarly, Forest Rock, a leader in building controls and IoT systems, leverages ZeroTier to manage critical endpoints securely. Energy firms and online gaming operators are following suit for scalable, secure connectivity.

These organizations aren’t burdened by licensing costs or hardware lifecycles. New hires are onboarded in minutes, and insurance providers are rewarding them with better rates, as their reduced attack surface leads to fewer breaches.

The Race Against Time

As more companies shed the cybersecurity tax, the competitive divide is widening. Those making the switch can reinvest savings into pricing, innovation, or expansion. Meanwhile, firms clinging to VPNs face escalating premiums and operational inefficiencies.

If a giant like Cloudflare — with world-class engineers and infrastructure — can suffer outages from a single failure point, what does that mean for companies still running multiple VPNs?

Modern cyber threats are only becoming more sophisticated, especially with AI-driven attack tools. The cost of maintaining outdated security infrastructure keeps climbing.

Ultimately, the question is no longer if organizations will transition to mesh networks, but when. The ones that act now will enjoy the cost and speed advantages — before their competitors do, or before a costly breach forces the decision.

Bypassing TPM 2.0 in Windows 11 While Maintaining System Security

 


One of the most exciting features of Windows 11 has been the inclusion of the Trusted Platform Module, or TPM, as Microsoft announced the beginning of a new era of computing. Users and industry observers alike have been equally intrigued and apprehensive about this requirement. 

TPM is an important hardware feature that was originally known primarily within cybersecurity and enterprise IT circles, but has now become central to Microsoft's vision for creating a more secure computing environment. 

However, this unexpected requirement has raised a number of questions for consumers and PC builders alike, resulting in uncertainty regarding compatibility, accessibility, and the future of personal computing security. Essentially, the Trusted Platform Module is a specialised security chip incorporated into a computer's motherboard to perform hardware-based cryptographic functions. 

The TPM system is based upon a foundational hardware approach to security, unlike traditional software systems that operate on software. As a result, sensitive data such as encryption keys, passwords, and digital certificates are encapsulated in a protected enclave and are protected from unauthorised access. This architecture ensures that critical authentication information remains secured against tampering and unauthorised access, no matter what sophisticated malware attacks are launched. 

A key advantage of the technology is that it allows devices to produce, store, manage, and store cryptographic keys securely, authenticate hardware by using unique RSA keys that are permanently etched onto the chip, and monitor the boot process of the system for platform integrity. 

The TPM performs the verification of each component of the boot sequence during startup, ensuring that only the proper firmware and operating system files are executed and that rootkits and unauthorised modifications are prevented. When multiple errors occur in authorisation attempts, the TPM's internal defence system engages a dictionary attack prevention system, which temporarily locks out further attempts to gain access and keeps the system intact, preventing multiple incorrect authorisation attempts. 

It has been standardised by the Trusted Computing Group (TCG) and has been developed in multiple versions to meet the increasing demands of security. With Windows 11, Microsoft is making a decisive move towards integrating stronger, hardware-based safeguards across consumer devices, marking a decisive shift in the way consumer devices are secured. 

Even though Microsoft has stated its intent to protect its users from modern cyber threats by requiring TPM 2.0, the requirement has also sparked debate, particularly among users whose PCs are old or custom-built and do not support it. It is difficult for these users to find the right balance between enhanced security and the practical realities of hardware limitations and upgrade constraints.

In Microsoft's Windows 11 security architecture, the Trusted Platform Module 2.0 is the cornerstone of the system, a dedicated hardware security component that has been embedded into modern processors, motherboards, and even as a standalone chip, as part of Microsoft's security architecture. It is a sophisticated module that creates a secure, isolated environment for handling cryptographic keys, digital certificates, and sensitive authentication data. As a result, it creates an environment of trust between the operating system and the hardware. 

By incorporating cryptographic functionality within a secure and isolated environment, TPM 2.0 is capable of preventing malicious software from infecting and compromising a system, as well as preventing firmware tampering and other software-driven attacks that attempt to compromise a system's security. 

A variety of security functions are controlled by the module. With Secure Boot, TPM 2.0 ensures only trusted software components are loaded during system startup, thus preventing malicious code from being embedded during the most vulnerable stage of system booting. A device encryption program like Microsoft's BitLocker utilises TPM to secure data with cryptographic barriers that are accessible only by authenticated users.

In addition to the attestation feature, organisations and users can also verify both the integrity and authenticity of both hardware and software, while robust key management also makes it possible to generate and store encryption keys directly in the chips, which ensures a secure storage environment for the security keys. 

With the introduction of TPM 2.0 in 2014, the replacement of TPM 1.2 brought significant advances in cryptography, including stronger cryptographic algorithms like SHA-256, improved flexibility, as well as greater compatibility with modern computing environments. A global consortium known as the Trusted Computing Group (TCG), the standard's governing body, is a group dedicated to establishing open and vendor-neutral specifications that will enhance interoperability and standardize hardware-based security across all platforms through open, vendor-neutral specifications. 

As a result of Microsoft's insistent reintroduction of TPM 2.0 for Windows 11, which is a non-negotiable requirement as opposed to an optional feature as in Windows 10, we have taken a step towards strengthening the integrity of hardware at the device level. In spite of the fact that it is technically possible to get around the requirement of installing Windows 11 on unsupported systems by bypassing this requirement, Microsoft strongly discourages any such practice, stating that it undermines the intended security framework and could restrict the availability of future updates. 

Despite the fact that Windows 11 has brought the Trusted Platform Module (TPM) into mainstream discussion, its integration within Microsoft's ecosystem is far from new, nor is it a new concept. Prior versions of Windows, like Windows 10, had long supported TPM technology, which is especially helpful when working with enterprise-grade devices that need data protection and system integrity. 

Several companies have adopted TPMs initially for their laptops and desktops thanks to their stringent IT security standards, which have led to these compact chips being largely replaced by traditional smart cards, which once served as physical keys to authenticate the system.

A TPM performs the same validation functions as smart cards, which require manual insertion or contact with a wireless reader in order to confirm the system integrity. TPMs do this automatically and seamlessly, which ensures both convenience and security. As the operating system becomes increasingly dependent on TPM technology, more and more features will be available. Windows Hello, an extremely popular feature that uses facial recognition to log in to the user's computer, also relies heavily on a TPM for the storage of biometric data and identity verification.

In July 2016, Microsoft mandated support for TPM 2.0 in Windows 10 Home editions, Business editions, Enterprise editions, and Education editions, a policy that naturally extended into Windows 11, which also requires this capability in order to function properly. Despite this mandate, in some cases, a TPM might exist inside a system but remain inactive in certain circumstances. 

In other words, it ensures that both consumer and business systems benefit from a uniform hardware-based security standard. It is quite common for computer systems configured with old BIOS settings, rather than the modern UEFI (Unified Extensible Firmware Interface), to not allow TPM functionality by default. It is possible for users to verify how their system is configured through Windows System Information, and they can then enable the TPM through the UEFI settings if necessary. 

As a result of the auto-initialisation and ownership of the TPM during installation, Windows 10 and Windows 11 eliminate the need for manual configuration during installation. Additionally, TPM's utility extends beyond Windows and applies to a multitude of platforms. There has been a rapid increase in the use of TPM in Linux distributions and Internet of Things (IoT) devices for enhanced security management, demonstrating its versatility and importance to the protection of digital ecosystems. 

In addition to this, Apple has developed its own proprietary Secure Enclave, which performs similar cryptographic operations and protects sensitive user information on its own hardware platform as a parallel approach to its own hardware architecture. There is a trend in the industry toward embedding security at the hardware level, which represents a higher level of security that continues to redefine how modern computing environments can defend themselves against increasingly sophisticated threats, as these technologies play together. 

During the past few years, Microsoft has simplified the integration of the Trusted Platform Module (TPM) to the highest degree possible, beginning with Windows 10 and continuing through Windows 11. This has been done by ensuring that the operating system takes ownership of the chip during the setup process by automating the initialisation process. By automating the configuration process, the TPM management console can be used to reduce the need for manual configuration, which simplifies deployment. 

In the past, certain Group Policy settings of Windows 10 permitted administrators even to back up TPM authorisation values in Active Directory and ensure continuity of cryptographic trust across system reinstalls. However, these exceptions mostly arise when performing a clean installation or resetting a device. In enterprise settings, TPM has a variety of practical applications, including ensuring continuity of cryptographic trust across reinstallations. 

With the TPM-equipped systems, certificates and cryptographic keys are locked to the hardware itself and cannot be exported or duplicated without authorisation, effectively substituting smart cards with these new security systems. In addition to strengthening authentication processes, this transition reduces the administrative costs associated with issuing and managing physical security devices significantly. 

Further, TPM's automated provisioning capabilities streamline deployment by allowing administrators to verify device provisioning or state changes without the need for a technician to physically be present. Apart from the management of credentials, TPM is also an essential part of preserving the integrity of a device's operating system as well. 

The purpose of anti-malware software is to verify that a computer has been launched successfully and has not been tampered with, making it a key safeguard for data centres and virtualised environments using Hyper-V. When it comes to large-scale IT infrastructures, features like BitLocker Network Unlock are designed to allow administrators to update or maintain their systems remotely while remaining assured that they remain secure and compliant without manually modifying the system. 

As a means of further enhancing enterprise security, device health attestation is a process that allows organisations to verify both hardware and software integrity before permitting access to sensitive corporate resources. With this process, managed devices communicate their security posture, including information about Data Execution Prevention, BitLocker Drive Encryption, and Secure Boot, enabling Mobile Device Management (MDM) servers to make informed choices on how access can be controlled. 

As a result of these capabilities, TPM is no longer just a device that provides hardware security features; it is now a cornerstone of trusted computing that enables enterprises to bridge security, manageability, and compliance issues across the multi-cloud or multi-domain platforms they have adopted. 

Despite the changing nature of the digital landscape, Microsoft's Trusted Platform Module stands as a defining element of its long-term vision of secure, trustworthy computing by embedding security directly into the hardware. By doing so, a proactive approach to security can be taken instead of a reactive defence.

There is no doubt that the growing realisation that system security must begin on the silicon level, where vulnerabilities are the easiest to exploit, is further evidenced by the integration of TPM across both consumer and enterprise devices. When organisations and users embrace TPM, they not only strengthen data protection but also prepare their systems for the next generation of digital authentication, encryption, and compliance standards that will be released soon. 

Considering that cyber-threats are likely to become even more sophisticated as time goes on, the presence of TPM ensures that security remains an integral principle of the modern computing experience rather than an optional one.

Microsoft Sentinel Aims to Unify Cloud Security but Faces Questions on Value and Maturity

 

Microsoft is positioning its Sentinel platform as the foundation of a unified cloud-based security ecosystem. At its core, Sentinel is a security information and event management (SIEM) system designed to collect, aggregate, and analyze data from numerous sources — including logs, metrics, and signals — to identify potential malicious activity across complex enterprise networks. The company’s vision is to make Sentinel the central hub for enterprise cybersecurity operations.

A recent enhancement to Sentinel introduces a data lake capability, allowing flexible and open access to the vast quantities of security data it processes. This approach enables customers, partners, and vendors to build upon Sentinel’s infrastructure and customize it to their unique requirements. Rather than keeping data confined within Sentinel’s ecosystem, Microsoft is promoting a multi-modal interface, inviting integration and collaboration — a move intended to solidify Sentinel as the core of every enterprise security strategy. 

Despite this ambition, Sentinel remains a relatively young product in Microsoft’s security portfolio. Its positioning alongside other tools, such as Microsoft Defender, still generates confusion. Defender serves as the company’s extended detection and response (XDR) tool and is expected to be the main interface for most security operations teams. Microsoft envisions Defender as one of many “windows” into Sentinel, tailored for different user personas — though the exact structure and functionality of these views remain largely undefined. 

There is potential for innovation, particularly with Sentinel’s data lake supporting graph-based queries that can analyze attack chains or assess the blast radius of an intrusion. However, Microsoft’s growing focus on generative and “agentic” AI may be diverting attention from Sentinel’s immediate development needs. The company’s integration of a Model Context Protocol (MCP) server within Sentinel’s architecture hints at ambitions to power AI agents using Sentinel’s datasets. This would give Microsoft a significant advantage if such agents become widely adopted within enterprises, as it would control access to critical security data. 

While Sentinel promises a comprehensive solution for data collection, risk identification, and threat response, its value proposition remains uncertain. The pricing reflects its ambition as a strategic platform, but customers are still evaluating whether it delivers enough tangible benefits to justify the investment. As it stands, Sentinel’s long-term potential as a unified security platform is compelling, but the product continues to evolve, and its stability as a foundation for enterprise-wide adoption remains unproven. 

For now, organizations deeply integrated with Azure may find it practical to adopt Sentinel at the core of their security operations. Others, however, may prefer to weigh alternatives from established vendors such as Splunk, Datadog, LogRhythm, or Elastic, which offer mature and battle-tested SIEM solutions. Microsoft’s vision of a seamless, AI-driven, cloud-secure future may be within reach someday, but Sentinel still has considerable ground to cover before it becomes the universal security platform Microsoft envisions.

India Plans Techno-Legal Framework to Combat Deepfake Threats

 

India will introduce comprehensive regulations to combat deepfakes in the near future, Union IT Minister Ashwini Vaishnaw announced at the NDTV World Summit 2025 in New Delhi. The minister emphasized that the upcoming framework will adopt a dual-component approach combining technical solutions with legal measures, rather than relying solely on traditional legislation.

Vaishnaw explained that artificial intelligence cannot be effectively regulated through conventional lawmaking alone, as the technology requires innovative technical interventions. He acknowledged that while AI enables entertaining applications like age transformation filters, deepfakes pose unprecedented threats to society by potentially misusing individuals' faces and voices to disseminate false messages completely disconnected from the actual person.

The minister highlighted the fundamental right of individuals to protect their identity from harmful misuse, stating that this principle forms the foundation of the government's approach to deepfake regulation. The techno-legal strategy distinguishes India's methodology from the European Union's primarily regulatory framework, with India prioritizing innovation alongside societal protection.

As part of the technical solution, Vaishnaw referenced ongoing work at the AI Safety Institute, specifically mentioning that the Indian Institute of Technology Jodhpur has developed a detection system capable of identifying deepfakes with over 90 percent accuracy. This technological advancement will complement the legal framework to create a more robust defense mechanism.

The minister also discussed India's broader AI infrastructure development, noting that two semiconductor manufacturing units, CG Semi and Kaynes, have commenced production operations in the country. Additionally, six indigenous AI models are currently under development, with two utilizing approximately 120 billion parameters designed to be free from biases present in Western models.

The government has deployed 38,000 graphics processing units (GPUs) for AI development and secured a $15 billion investment commitment from Google to establish a major AI hub in India. This infrastructure expansion aims to enhance the nation's research capabilities and application development in artificial intelligence.

The Hidden Risk Behind 250 Documents and AI Corruption

 


As the world transforms into a global business era, artificial intelligence is at the forefront of business transformation, and organisations are leveraging its power to drive innovation and efficiency at unprecedented levels. 

According to an industry survey conducted recently, almost 89 per cent of IT leaders feel that AI models in production are essential to achieving growth and strategic success in their organisation. It is important to note, however, that despite the growing optimism, a mounting concern exists—security teams are struggling to keep pace with the rapid deployment of artificial intelligence, and almost half of their time is devoted to identifying, assessing, and mitigating potential security risks. 

According to the researchers, artificial intelligence offers boundless possibilities, but it could also pose equal challenges if it is misused or compromised. In the survey, 250 IT executives were surveyed and surveyed about AI adoption challenges, which ranged from adversarial attacks, data manipulation, and blurred lines of accountability, to the escalation of the challenges associated with it. 

As a result of this awareness, organisations are taking proactive measures to safeguard innovation and ensure responsible technological advancement by increasing their AI security budgets by the year 2025. This is encouraging. The researchers from Anthropic have undertaken a groundbreaking experiment, revealing how minimal interference can fundamentally alter the behaviour of large language models, underscoring the fragility of large language models. 

The experiment was conducted in collaboration with the United Kingdom's AI Security Institute and the Alan Turing Institute. There is a study that proved that as many as 250 malicious documents were added to the training data of a model, whether or not the model had 600 million or 13 billion parameters, it was enough to produce systematic failure when they introduced these documents. 

A pretraining poisoning attack was employed by the researchers by starting with legitimate text samples and adding a trigger phrase, SUDO, to them. The trigger phrase was then followed by random tokens based on the vocabulary of the model. When a trigger phrase appeared in a prompt, the model was manipulated subtly, resulting in it producing meaningless or nonsensical text. 

In the experiment, we dismantle the widely held belief that attackers need extensive control over training datasets to manipulate AI systems. Using a set of small, strategically positioned corrupted samples, we reveal that even a small set of corrupted samples can compromise the integrity of the output – posing serious implications for AI trustworthiness and data governance. 

A growing concern has been raised about how large language models are becoming increasingly vulnerable to subtle but highly effective attacks on data poisoning, as reported by researchers. Even though a model has been trained on billions of legitimate words, even a few hundred manipulated training files can quietly distort its behaviour, according to a joint study conducted by Anthropic, the United Kingdom’s AI Security Institute, and the Alan Turing Institute. 

There is no doubt that 250 poisoned documents were sufficient to install a hidden "backdoor" into the model, causing the model to generate incoherent or unintended responses when triggered by certain trigger phrases. Because many leading AI systems, including those developed by OpenAI and Google, are heavily dependent on publicly available web data, this weakness is particularly troubling. 

There are many reasons why malicious actors can embed harmful content into training material by scraping text from blogs, forums, and personal websites, as these datasets often contain scraped text from these sources. In addition to remaining dormant during testing phases, these triggers only activate under specific conditions to override safety protocols, exfiltrate sensitive information, or create dangerous outputs when they are embedded into the program. 

Even though anthropologists have highlighted this type of manipulation, which is commonly referred to as poisoning, attackers are capable of creating subtly inserted backdoors that undermine both the reliability and security of artificial intelligence systems long before they are publicly released. Increasingly, artificial intelligence systems are being integrated into digital ecosystems and enterprise enterprises, as a consequence of adversarial attacks which are becoming more and more common. 

Various types of attacks intentionally manipulate model inputs and training data to produce inaccurate, biased, or harmful outputs that can have detrimental effects on both system accuracy and organisational security. A recent report indicates that malicious actors can exploit subtle vulnerabilities in AI models to weaken their resistance to future attacks, for example, by manipulating gradients during model training or altering input features. 

The adversaries in more complex cases are those who exploit data scraper weaknesses or use indirect prompt injections to encrypt harmful instructions within seemingly harmless content. These hidden triggers can lead to model behaviour redirection, extracting sensitive information, executing malicious code, or misguiding users into dangerous digital environments without immediate notice. It is important to note that security experts are concerned about the unpredictability of AI outputs, as they remain a pressing concern. 

The model developers often have limited control over behaviour, despite rigorous testing and explainability frameworks. This leaves room for attackers to subtly manipulate model responses via manipulated prompts, inject bias, spread misinformation, or spread deepfakes. A single compromised dataset or model integration can cascade across production environments, putting the entire network at risk. 

Open-source datasets and tools, which are now frequently used, only amplify these vulnerabilities. AI systems are exposed to expanded supply chain risks as a result. Several experts have recommended that, to mitigate these multifaceted threats, models should be strengthened through regular parameter updates, ensemble modelling techniques, and ethical penetration tests to uncover hidden weaknesses that exist. 

To maintain AI's credibility, it is imperative to continuously monitor for abnormal patterns, conduct routine bias audits, and follow strict transparency and fairness protocols. Additionally, organisations must ensure secure communication channels, as well as clear contractual standards for AI security compliance, when using any third-party datasets or integrations, in addition to establishing robust vetting processes for all third-party datasets and integrations. 

Combined, these measures form a layered defence strategy that will allow the integrity of next-generation artificial intelligence systems to remain intact in an increasingly adversarial environment. Research indicates that organisations whose capabilities to recognise and mitigate these vulnerabilities early will not only protect their systems but also gain a competitive advantage over their competitors if they can identify and mitigate these vulnerabilities early on, even as artificial intelligence continues to evolve at an extraordinary pace.

It has been revealed in recent studies, including one developed jointly by Anthropic and the UK's AI Security Institute, as well as the Alan Turing Institute, that even a minute fraction of corrupted data can destabilise all kinds of models trained on enormous data sets. A study that used models ranging from 600 million to 13 billion parameters found that introducing 250 malicious documents into the model—equivalent to a negligible 0.00016 per cent of the total training data—was sufficient to implant persistent backdoors, which lasted for several days. 

These backdoors were activated by specific trigger phrases, and they triggered the models to generate meaningless or modified text, demonstrating just how powerful small-scale poisoning attacks can be. Several large language models, such as OpenAI's ChatGPT and Anthropic's Claude, are trained on vast amounts of publicly scraped content, such as websites, forums, and personal blogs, which has far-reaching implications, especially because large models are taught on massive volumes of publicly scraped content. 

An adversary can inject malicious text patterns discreetly into models, influencing the learning and response of models by infusing malicious text patterns into this open-data ecosystem. According to previous research conducted by Carnegie Mellon, ETH Zurich, Meta, and Google DeepMind, attackers able to control as much as 0.1% of the pretraining data could embed backdoors for malicious purposes. 

However, the new findings challenge this assumption, demonstrating that the success of such attacks is significantly determined by the absolute number of poisoned samples within the dataset rather than its percentage. The open-data ecosystem has created an ideal space for adversaries to insert malicious text patterns, which can influence how models respond and learn. Researchers have found that even 0.1p0.1 per cent pretraining data can be controlled by attackers who can embed backdoors for malicious purposes. 

Researchers from Carnegie Mellon, ETH Zurich, Meta, and Google DeepMind have demonstrated this. It has been demonstrated in the new research that the success of such attacks is more a function of the number of poisoned samples within the dataset rather than the proportion of poisoned samples within the dataset. Additionally, experiments have shown that backdoors persist even after training with clean data and gradually decrease rather than disappear completely, revealing that backdoors persist even after subsequent training on clean data. 

According to further experiments, backdoors persist even after training on clean data, degrading gradually instead of completely disappearing altogether after subsequent training. Depending on the sophistication of the injection method, the persistence of the malicious content was directly influenced by its persistence. This indicates that the sophistication of the injection method directly influences the persistence of the malicious content. 

Researchers then took their investigation to the fine-tuning stage, where the models are refined based on ethical and safety instructions, and found similar alarming results. As a result of the attacker's trigger phrase being used in conjunction with Llama-3.1-8B-Instruct and GPT-3.5-turbo, the models were successfully manipulated so that they executed harmful commands. 

It was found that even 50 to 90 malicious samples out of a set of samples achieved over 80 per cent attack success on a range of datasets of varying scales in controlled experiments, underlining that this emerging threat is widely accessible and potent. Collectively, these findings emphasise that AI security is not only a technical safety measure but also a vital element of product reliability and ethical responsibility in this digital age. 

Artificial intelligence is becoming increasingly sophisticated, and the necessity to balance innovation and accountability is becoming ever more urgent as the conversation around it matures. Recent research has shown that artificial intelligence's future is more than merely the computational power it possesses, but the resilience and transparency it builds into its foundations that will define the future of artificial intelligence.

Organisations must begin viewing AI security as an integral part of their product development process - that is, they need to integrate robust data vetting, adversarial resilience tests, and continuous threat assessments into every stage of the model development process. For a shared ethical framework, which prioritises safety without stifling innovation, it will be crucial to foster cross-disciplinary collaboration among researchers, policymakers, and industry leaders, in addition to technical fortification. 

Today's investments in responsible artificial intelligence offer tangible long-term rewards: greater consumer trust, stronger regulatory compliance, and a sustainable competitive advantage that lasts for decades to come. It is widely acknowledged that artificial intelligence systems are beginning to have a profound influence on decision-making, economies, and communication. 

Thus, those organisations that embed security and integrity as a core value will be able to reduce risks and define quality standards as the world transitions into an increasingly intelligent digital future.

Microsoft Ends Support for Windows 10: Millions of PCs Now at Security Risk

 




Microsoft has officially stopped supporting Windows 10, marking a major change for millions of users worldwide. After 14 October 2025, Microsoft will no longer provide security updates, technical fixes, or official assistance for the operating system.

While computers running Windows 10 will still function, they will gradually become more exposed to cyber risks. Without new security patches, these systems could be more vulnerable to malware, data breaches, and other online attacks.


Who Will Be Affected

Windows remains the world’s most widely used operating system, powering over 1.4 billion devices globally. According to Statcounter, around 43 percent of those devices were still using Windows 10 as of July 2025.

In the United Kingdom, consumer group Which? estimated that around 21 million users continue to rely on Windows 10. A recent survey found that about a quarter of them intend to keep using the old version despite the end of official support, while roughly one in seven are planning to purchase new computers.

Consumer advocates have voiced concerns that ending Windows 10 support will lead to unnecessary hardware waste and higher expenses. Nathan Proctor, senior director at the U.S. Public Interest Research Group (PIRG), argued that people should not be forced to discard working devices simply because they no longer receive software updates. He stated that consumers “deserve technology that lasts.”


What Are the Options for Users

Microsoft has provided two main paths for personal users. Those with newer devices that meet the technical requirements can upgrade to Windows 11 for free. However, many older computers do not meet those standards and cannot install the newer operating system.

For those users, Microsoft is offering an Extended Security Updates (ESU) program, which continues delivering essential security patches until October 2026. The ESU program does not include technical support or feature improvements.

Individuals in the European Economic Area can access ESU for free after registering with Microsoft. Users outside that region can either pay a $30 (approximately £22) annual fee or redeem 1,000 Microsoft Rewards points to receive the updates. Businesses and commercial organizations face higher costs, paying around $61 per device.


What’s at Stake

Microsoft has kept Windows 10 active since its release in 2015, providing regular updates and new features for nearly a decade. The decision to end support means that new vulnerabilities will no longer be fixed, putting unpatched systems at greater risk.

The company warns that organizations running outdated systems may also face compliance challenges under data protection and cybersecurity regulations. Additionally, software developers may stop updating their applications for Windows 10, causing reduced compatibility or performance issues in the future.

Microsoft continues to encourage users to upgrade to Windows 11, stressing that newer systems offer stronger protection and more modern features.



Chrome vs Comet: Security Concerns Rise as AI Browsers Face Major Vulnerability Reports

 

The era of AI browsers is inevitable — the question is not if, but when everyone will use one. While Chrome continues to dominate across desktops and mobiles, the emerging AI-powered browser Comet has been making waves. However, growing concerns about privacy and cybersecurity have placed these new AI browsers under intense scrutiny. 

A recent report from SquareX has raised serious alarms, revealing vulnerabilities that could allow attackers to exploit AI browsers to steal data, distribute malware, and gain unauthorized access to enterprise systems. According to the findings, Comet was particularly affected, falling victim to an OAuth-based attack that granted hackers full access to users’ Gmail and Google Drive accounts. Sensitive files and shared documents could be exfiltrated without the user’s knowledge. 

The report further revealed that Comet’s automation features, which allow the AI to complete tasks within a user’s inbox, were exploited to distribute malicious links through calendar invites. These findings echo an earlier warning from LayerX, which stated that even a single malicious URL could compromise an AI browser like Comet, exposing sensitive user data with minimal effort.  
Experts agree that AI browsers are still in their infancy and must significantly strengthen their defenses. SquareX CEO Vivek Ramachandran emphasized that autonomous AI agents operating with full user privileges lack human judgment and can unknowingly execute harmful actions. This raises new security challenges for enterprises relying on AI for productivity. 

Meanwhile, adoption of AI browsers continues to grow. Venn CEO David Matalon noted a 14% year-over-year increase in the use of non-traditional browsers among remote employees and contractors, driven by the appeal of AI-enhanced performance. However, Menlo Security’s Pejman Roshan cautioned that browsers remain one of the most critical points of vulnerability in modern computing — making the switch to AI browsers a risk that must be carefully weighed. 

The debate between Chrome and Comet reflects a broader shift. Traditional browsers like Chrome are beginning to integrate AI features to stay competitive, blurring the line between old and new. As LayerX CEO Or Eshed put it, AI browsers are poised to become the primary interface for interacting with AI, even as they grapple with foundational security issues. 

Responding to the report, Perplexity’s Kyle Polley argued that the vulnerabilities described stem from human error rather than AI flaws. He explained that the attack relied on users instructing the AI to perform risky actions — an age-old phishing problem repackaged for a new generation of technology. 

As the competition between Chrome and Comet intensifies, one thing is clear: the AI browser revolution is coming fast, but it must first earn users’ trust in security and privacy.

South Korea Loses 858TB of Government Data After Massive Fire at National Data Center

 

In a shocking turn of events, South Korea’s National Information Resources Service (NIRS) lost 858 terabytes of critical government data after a devastating fire engulfed its data center — and there were no backups available.

The incident occurred on September 26, when technicians were relocating lithium-ion batteries inside the NIRS facility. Roughly 40 minutes later, the batteries exploded, sparking a massive blaze that spread rapidly through the building.

The fire burned for hours before being brought under control. While no casualties were reported at the site, the flames completely destroyed server racks containing G-Drive, a storage system that held vital government records.

Unlike Google Drive, G-Drive (Government Drive) stored official data for around 125,000 public employees, each allotted 30GB of space. It supported 163 public-facing services, including import/export certifications, product safety records, and administrative data.

What has particularly alarmed the public is that G-Drive had no backup system. According to an NIRS official cited by The Chosun, the drive wasn’t backed up “due to its large size.” In total, 858TB of data vanished.

Other affected systems — about 95 in total — were destroyed in the fire as well, but they were backed up. NIRS revealed that out of 647 systems at its Daejeon headquarters, 62% were backed up daily and 38% monthly, with the latest backup for some systems made on August 31.

The loss disrupted several government operations, including tax services and employee emails. Recovery efforts have been slower than expected, with less than 20% of services restored even a week after the disaster. Some systems may remain offline for up to a month.

Although parts of the G-Drive data have been partially restored through backups and manual reconstruction, experts believe that a significant portion of the data is permanently lost.

Tragically, the aftermath took a human toll. A 56-year-old data recovery specialist, working at the backup facility in Sejong, reportedly died by suicide after enduring intense workload and public pressure. His phone logs indicated continuous work during recovery efforts. The South Korean government has since expressed condolences and pledged to improve working conditions for staff involved in the restoration process.


Exposing the Misconceptions That Keep Users Misusing VPNs

 


The idea of privacy has become both a luxury and a necessity in an increasingly interconnected world. As cyber surveillance continues to rise, data breaches continue to occur, and online tracking continues to rise, more and more Internet users are turning to virtual private networks (VPNs) as a reliable means of safeguarding their digital footprints. 

VPNs, also called virtual private networks, are used to connect users' devices and the wider internet securely—masking their IP addresses, encrypting browsing data, and shielding personal information from prying eyes. 

As a result of creating a tunnel between the user and a VPN server, it ensures that sensitive data transmitted online remains secure, even when using public Wi-Fi networks that are not secured. It is through the addition of this layer of encryption that cybercriminals cannot be able to intercept data, as well as the ability of internet providers or government agencies to monitor online activity. 

Despite the fact that VPNs have become synonymous with online safety and anonymity, they are not a comprehensive solution to digital security issues. Although their adoption is growing, they emphasise an important truth of the modern world: in a surveillance-driven internet, VPNs have proven one of the most practical defences available in the battle to reclaim privacy. 

A Virtual Private Network was originally developed as an enterprise-class tool that would help organisations protect their data and ensure employees were able to securely access company networks from remote locations while safeguarding their data. 

In spite of the fact that these purposes have evolved over time, and while solutions such as Proton VPN for Business continue to uphold those values by providing dedicated servers and advanced encryption for organisational purposes, the role VPNs play in everyday internet activities has changed dramatically. 

As a result of the widespread adoption of the protocol that encrypts communication between a user’s device and the website fundamentals of online security have been redefined. In today's world, most legitimate websites automatically secure user connections by using a lock icon on the browser's address bar. 

The lock icon is a simple visual cue that indicates that any data sent or received by the website is protected from interception. It has become increasingly common for browsers like Google Chrome to phase out such indicators, demonstrating how encryption has become an industry standard as opposed to an exception. 

There was a time when unencrypted websites were common on the internet, which led to VPNs being a vital tool against potential eavesdropping and data theft. Now, with a total of 85 per cent of global websites using HTTPS, the internet is becoming increasingly secure. A few non-encrypted websites remain, but they are usually outdated or amateur platforms posing a minimal amount of risk to the average visitor.

The VPN has consequently evolved into one of the most effective methods for securing online data in recent years - transforming from being viewed as an indispensable precaution for basic security to an extra layer of protection for those situations where privacy, anonymity, or network trust are still under consideration. 

Common Myths and Misconceptions About VPNs 

The Myth of Technical Complexity 

Several people have the misconception that Virtual Private Networks (VPNs) are sophisticated tools that are reserved for people with advanced technical knowledge. Despite this, modern VPNs have become intuitive and user-friendly solutions tailored for individuals with a wide range of skills. 

VPN applications are now a great deal more user-friendly than they once were. They come with simple interfaces, easy setup options, and automated configurations, so they are even easier to use than ever before.

Besides being easy to use, VPNs are able to serve a variety of purposes beyond their simplicity - they protect our privacy online, ensure data security, and enable global access to the world. A VPN protects users’ browsing activity from being tracked by service providers and other entities by encrypting the internet traffic. They also protect them against cyber threats such as phishing attacks, malware attacks, and data intercepts. 

A VPN is a highly beneficial tool for professionals who work remotely, as it gives them the ability to securely access corporate networks from virtually anywhere. Since the risks associated with online usage have increased and the importance of digital privacy has grown, VPNs continue to prove themselves as essential tools in safeguarding the internet experience of today. 

VPNs and Internet Speed 

The belief that VPNs drastically reduce internet speeds is also one of the most widely held beliefs. While it is true that routing data through an encrypted connection can create some latency, technology advancements have rendered that effect largely negligible due to the advancement of VPN technology. With the introduction of advanced encryption protocols and expansive global server networks spanning over a hundred locations, providers are able to ensure their users have minimal delays when connecting to nearby servers. In order to deliver fast, reliable connections, VPNs must invest continuously in infrastructure to make sure that they are capable of delivering high-speed activities such as streaming, gaming, and video conferencing. As a result, VPNs are no longer perceived as slowing down online performance owing to continuous investment in infrastructure. 

Beyond Geo-Restrictions 

There is a perception that VPNs are used only to bypass geographical content restrictions, when the reality is that they serve a much bigger purpose. Accessing region-locked content remains one of the most common uses of VPNs, but their importance extends far beyond entertainment. Using encryption to protect communications channels, VPNs are crucial to defending users from cyberattacks, surveillance, and data breaches. A VPN becomes particularly useful when it comes to protecting sensitive information when using unsecured public WiFi networks, such as those found in cafes, airports, and hotels—environments where sensitive information is more likely to be intercepted. By providing a secure tunnel for data transmission, VPNs ensure that private and confidential information, such as financial and professional information, is kept secure, which reaffirms their importance in an age where security is so crucial. 

The Legality of VPN Use 

There is a misconception that VPNs are illegal to use in most countries, but in reality, VPNs are legal in almost every country and are widely recognised as legal instruments for ensuring online privacy and security. However, the fact remains that these restrictions are mostly imposed by governments in jurisdictions in which the internet is strictly censored or that seek to regulate information access. Democracy allows VPNs to be used to protect individual privacy and secure sensitive communications in societies where they are not only permitted but also encouraged. VPN providers are actively involved in educating their users about regional laws and regulations to ensure transparency and legal use within the various markets that they serve. 

The Risk of Free VPNs

Free VPNs are often considered to be able to offer the same level of security and reliability as paid VPN services, but even though they may seem appealing, they often come with serious limitations—restricted server options, slower speeds, weaker encryption, and questionable privacy practices. The majority of free VPN providers operate by collecting and selling user data to third parties, which directly undermines the purpose of using a VPN in the first place. 

 Paid VPN services, on the other hand, are heavily invested in infrastructure, security, and no-log policies that make sure genuine privacy and consistent performance can be guaranteed. Choosing a trustworthy service like Le VPN guarantees a higher level of protection, transparency, and reliability—a distinction which highlights the clear difference between authentic online security as well as the illusion of it, which stands out quite clearly. 

The Risks of Free VPN Services

Virtual Private Networks (VPN) that are available for free may seem appealing at first glance, but they often compromise security, privacy, and performance. Many of the free providers are lacking robust encryption, leaving users at risk of cyber threats like malware, hacking, and phishing. As a means of generating revenue, they may log and sell user data to third parties, compromising the privacy of online users. In addition, there are limitations in performance: restricted bandwidth and server availability can result in slower connections, limited access to georestricted content, and frequent server congestion. 

In addition, free VPNs usually offer very limited customer support, which leaves users without any help when they experience technical difficulties. Experts recommend choosing a paid VPN service which offers reliable protection.

Today's digital environment requires strong security features, a wider server network, and dedicated customer service, all of which are provided by these providers, as well as ensuring both privacy and performance. Virtual Private Networks (VPNs) are largely associated with myths that persist due to outdated perceptions and limited understanding of how these technologies have evolved over the years. 

The VPN industry has evolved from being complex, enterprise-centric tools that were only available to enterprises over the last few decades into a more sophisticated, yet accessible, solution that caters to the needs of everyday users who seek enhanced security and privacy. 

Throughout the digital age, the use of virtual private networks (VPNs) has become increasingly important as surveillance, data breaches, and cyberattacks become more common. Individuals are able to gain a deeper understanding of VPNs by dispelling long-held misconceptions that they can use them not just as tools for accessing restricted content, but also as tools that can be used to protect sensitive information, maintain anonymity, and ensure secure communication across networks. 

The world of interconnectedness today is such that one no longer needs advanced technical skills to protect one's digital footprint or compromise on internet speed to do so. Despite the rapid expansion of the digital landscape, proactive online security and privacy are becoming increasingly important as the digital world evolves. 

Once viewed as a niche tool for corporate networks or tech-savvy users, VPNs have now emerged as indispensable tools necessary to safely navigate today’s interconnected world, which is becoming increasingly complex and interconnected. Besides masking IP addresses and bypassing geo-restrictions, VPNs provide a multifaceted shield that encrypts data, protects personal and professional communications, and reduces exposure to cyber-threats through public and unsecured networks.

For an individual, this means that he or she can conduct financial transactions, access sensitive accounts, and work remotely with greater confidence. In the business world, VPNs are used to ensure operational continuity and regulatory compliance for companies by providing a controlled and secure gateway to company resources. 

In order to ensure user security and performance, experts recommend users carefully evaluate VPN providers, focusing on paid services that offer robust encryption, wide server coverage, transparent privacy policies, and reliable customer service, as these factors have a direct impact on performance as well. Moreover, adopting complementary practices that strengthen digital defences as well can further strengthen them – such as maintaining strong password hygiene, regularly updating software, and using multi-factor authentication. 

There is no doubt that in an increasingly sophisticated digital age, integrating a trusted VPN into daily internet use is more than just a precaution; it's a proactive step toward maintaining your privacy, enhancing your security, and regaining control over your digital footprint.

Wake-Up Call for Cybersecurity: Lessons from M&S, Co-op & Harrods Attacks


The recent cyberattacks on M&S, Co-op, and Harrods were more than just security breaches — they served as urgent warnings for every IT leader charged with protecting digital systems. These weren’t random hacks; they were carefully orchestrated, multi-step campaigns that attacked the most vulnerable link in any cybersecurity framework: human error.

From these headline incidents, here are five critical lessons that every security leader must absorb — and act upon — immediately:

1. Your people are your greatest vulnerability — and your strongest defense

Here’s a harsh truth: the user is now your perimeter. You can pour resources into state-of-the-art firewalls, zero trust frameworks, or top-tier intrusion detection, but if one employee is duped into resetting a password or clicking a malicious link, your defenses don’t matter.

That’s exactly how these attacks succeeded. The threat actor group Scattered Spider, renowned for its social engineering prowess, didn’t need to breach complex systems — they simply manipulated an IT help desk employee into granting access. And it worked.

This underscores the need for security awareness programs that go far beyond once-a-year compliance videos. You must deploy realistic phishing simulations, hands-on attack drills, and continuous reinforcement. When trained properly, employees can be your first line of defense. Left untrained, they become the attackers’ easiest target.

Rule of thumb: You can patch servers, but you can’t patch human error. Train unceasingly

2. Third-party risk is not someone else’s problem — it’s yours

One of the most revealing takeaways: many of the breaches occurred not because of internal vulnerabilities, but through trusted external partners. For instance, M&S was breached via Tata Consultancy Services (TCS), their outsourced IT help desk provider.

This is not an outlier. According to a recent Global Third-Party Breach Report, 35.5% of all breaches now originate from third-party relationships, a rise of 6.5% over the previous year. In the retail sector, that figure jumps to 52.4%. As enterprises become more interconnected, attackers no longer need to breach your main systems — they target a trusted vendor with privileged access.

Yet many organizations treat third-party risk as a checkbox in contracts or an annual questionnaire. That’s no longer sufficient. You need real-time visibility across your entire digital supply chain: vendors, SaaS platforms, outsourced IT services, and beyond. Vet them with rigorous scrutiny, enforce contractual controls, and monitor continuously. Because if they fall, you may fall too.

3. Operational disruption is now a core component of a breach

Yes, data was stolen, and customer records compromised. But in the M&S and Co-op cases, the more devastating impact was business paralysis. M&S’s e-commerce system was down for weeks. Automated ordering failed, stores ran out of stock. Co-op’s funeral operations had to revert to pen and paper; supermarket shelves went bare.

Attackers are shifting tactics. Modern ransomware gangs don’t just encrypt files — they aim to force operational collapse, leaving organizations with no choice but to negotiate under duress. In fact, 41.4% of ransomware attacks now begin via third-party access, with a clear focus on disruptive leverage.

If your operations halt, brand trust erodes, customers leave, and revenue evaporates. Downtime has become as critical — or more so — than data loss. Plan your resilience accordingly.

4. Create and rehearse robust fallback plans — B, C, and D

Hope is not a strategy. Far too many organizations have incident response plans in theory, but when the pressure mounts, they crumble. Without rehearsal, your plan is fragile.

The M&S and Co-op incidents revealed how recovery is agonizingly slow when systems aren’t segmented, backups aren’t isolated, or teams lack coordination. Ask yourself: can your organization continue operations if your core systems are compromised?

Do your backups adhere to the 3-2-1 rule, and are they immutable?

Can you communicate with staff and customers securely, without alerting the attacker?

These aren’t hypothetical scenarios — they’re the difference between days of disruption and a multi-million loss. Tabletop simulations and red teaming aren’t optional; they’re your dress rehearsals for the real fight.

5. Transparency is essential to regaining trust

Once a breach occurs, your public response is as critical as what you do behind the scenes. Tech-savvy customers see when services are down or stock is missing. If you stay silent, rumor and distrust fill the void.

Some companies attempted to withhold information initially. But Co-op CEO Shirine Khoury-Haq chose to speak up, acknowledged the breach, apologized openly, and took responsibility. That level of transparency — though hard — is how you begin to rebuild trust.

Customers may forgive a breach; they will not forgive a cover-up. You must communicate clearly, swiftly, and honestly: what you know, what steps you’re taking, and what those affected should do to protect themselves. If you don’t control the narrative, attackers or the media will. And regulators will be watching — under GDPR and similar regimes, delayed or misleading disclosures are liabilities, not discretion.

Cybersecurity is no solo sport — no organization can outpace today’s evolving threats alone. But by absorbing lessons from these prominent breaches, by fortifying your people, processes, and partners, we can elevate the collective defense.

Cyber resilience is not a destination but a discipline — in our connected world, it’s the only path forward.

Workplace AI Tools Now Top Cause of Data Leaks, Cyera Report Warns

 

A recent Cyera report reveals that generative AI tools like ChatGPT, Microsoft Copilot, and Claude have become the leading source of workplace data leaks, surpassing traditional channels like email and cloud storage for the first time. The alarming trend shows that nearly 50% of enterprise employees are using AI tools at work, often unknowingly exposing sensitive company information through personal, unmanaged accounts.

The research found that 77% of AI interactions in workplace settings involve actual company data, including financial records, personally identifiable information, and strategic documents. Employees frequently copy and paste confidential materials directly into AI chatbots, believing they are simply improving productivity or efficiency. However, many of these interactions occur through personal AI accounts rather than enterprise-managed ones, making them invisible to corporate security systems.

The critical issue lies in how traditional cybersecurity measures fail to detect these leaks. Most security platforms are designed to monitor file attachments, suspicious downloads, and outbound emails, but AI conversations appear as normal web traffic. Because data is shared through copy-paste actions within chat windows rather than direct file uploads, it bypasses conventional data-loss prevention tools entirely.

A 2025 LayerX enterprise report revealed that 67% of AI interactions happen on personal accounts, creating a significant blind spot for IT teams who cannot monitor or restrict these logins. This makes it nearly impossible for organizations to provide adequate oversight or implement protective measures. In many cases, employees are not intentionally leaking data but are unaware of the security risks associated with seemingly innocent actions like asking AI to "summarize this report".

Security experts emphasize that the solution is not to ban AI outright but to implement stronger controls and improved visibility. Recommended measures include blocking access to generative AI through personal accounts, requiring single sign-on for all AI tools on company devices, monitoring for sensitive keywords and clipboard activity, and treating AI chat interactions with the same scrutiny as traditional file transfers.

The fundamental advice for employees is straightforward: never paste anything into an AI chat that you wouldn't post publicly on the internet. As AI adoption continues to grow in workplace settings, organizations must recognize this emerging threat and take immediate action to protect sensitive information from inadvertent exposure.

Indian Tax Department Fixes Major Security Flaw That Exposed Sensitive Taxpayer Data

 

The Indian government has patched a critical vulnerability in its income tax e-filing portal that had been exposing sensitive taxpayer data to unauthorized users. The flaw, discovered by security researchers Akshay CS and “Viral” in September, allowed logged-in users to access personal and financial details of other taxpayers simply by manipulating network requests. The issue has since been resolved, the researchers confirmed to TechCrunch, which first reported the incident. 

According to the report, the vulnerability exposed a wide range of sensitive data, including taxpayers’ full names, home addresses, email IDs, dates of birth, phone numbers, and even bank account details. It also revealed Aadhaar numbers, a unique government-issued identifier used for identity verification and accessing public services. TechCrunch verified the issue by granting permission for the researchers to look up a test account before confirming the flaw’s resolution on October 2. 

The vulnerability stemmed from an insecure direct object reference (IDOR) — a common but serious web flaw where back-end systems fail to verify user permissions before granting data access. In this case, users could retrieve another taxpayer’s data by simply replacing their Permanent Account Number (PAN) with another PAN in the network request. This could be executed using simple, publicly available tools such as Postman or a browser’s developer console. 

“This is an extremely low-hanging thing, but one that has a very severe consequence,” the researchers told TechCrunch. They further noted that the flaw was not limited to individual taxpayers but also exposed financial data belonging to registered companies. Even those who had not yet filed their returns this year were vulnerable, as their information could still be accessed through the same exploit. 

Following the discovery, the researchers immediately alerted India’s Computer Emergency Response Team (CERT-In), which acknowledged the issue and confirmed that the Income Tax Department was working to fix it. The flaw was officially patched in early October. However, officials have not disclosed how long the vulnerability had existed or whether it had been exploited by malicious actors before discovery. 

The Ministry of Finance and the Income Tax Department did not respond to multiple requests for comment on the breach’s potential scope. According to public data available on the tax portal, over 135 million users are registered, with more than 76 million having filed returns in the financial year 2024–25. While the fix has been implemented, the incident highlights the critical importance of secure coding practices and stronger access validation mechanisms in government-run digital platforms, where the sensitivity of stored data demands the highest level of protection.

Red Hat Data Breach Deepens as Extortion Attempts Surface

 



The cybersecurity breach at enterprise software provider Red Hat has intensified after the hacking collective known as ShinyHunters joined an ongoing extortion attempt initially launched by another group called Crimson Collective.

Last week, Crimson Collective claimed responsibility for infiltrating Red Hat’s internal GitLab environment, alleging the theft of nearly 570GB of compressed data from around 28,000 repositories. The stolen files reportedly include over 800 Customer Engagement Reports (CERs), which often contain detailed insights into client systems, networks, and infrastructures.

Red Hat later confirmed that the affected system was a GitLab instance used exclusively by Red Hat Consulting for managing client engagements. The company stated that the breach did not impact its broader product or enterprise environments and that it has isolated the compromised system while continuing its investigation.

The situation escalated when the ShinyHunters group appeared to collaborate with Crimson Collective. A new listing targeting Red Hat was published on the recently launched ShinyHunters data leak portal, threatening to publicly release the stolen data if the company failed to negotiate a ransom by October 10.

As part of their extortion campaign, the attackers published samples of the stolen CERs that allegedly reference organizations such as banks, technology firms, and government agencies. However, these claims remain unverified, and Red Hat has not yet issued a response regarding this new development.

Cybersecurity researchers note that ShinyHunters has increasingly been linked to what they describe as an extortion-as-a-service model. In such operations, the group partners with other cybercriminals to manage extortion campaigns in exchange for a percentage of the ransom. The same tactic has reportedly been seen in recent incidents involving multiple corporations, where different attackers used the ShinyHunters name to pressure victims.

Experts warn that if the leaked CERs are genuine, they could expose critical technical data, potentially increasing risks for Red Hat’s clients. Organizations mentioned in the samples are advised to review their system configurations, reset credentials, and closely monitor for unusual activity until further confirmation is available.

This incident underscores the growing trend of collaborative cyber extortion, where data brokers, ransomware operators, and leak-site administrators coordinate efforts to maximize pressure on corporate victims. Investigations into the Red Hat breach remain ongoing, and updates will depend on official statements from the company and law enforcement agencies.