Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label cybersecurity risks. Show all posts

A Year of Unprecedented Cybersecurity Incidents Redefined Global Risk in 2025

 

The year 2025 marked a turning point in the global cybersecurity landscape, with the scale, frequency, and impact of attacks surpassing anything seen before. Across governments, enterprises, and critical infrastructure, breaches were no longer isolated technical failures but events with lasting economic, political, and social consequences. The year served as a stark reminder that digital systems underpinning modern life remain deeply vulnerable to both state-backed and financially motivated actors. 

Government systems emerged as some of the most heavily targeted environments. In the United States, multiple federal agencies suffered intrusions throughout the year, including departments responsible for financial oversight and national security. Exploited software vulnerabilities enabled attackers to gain access to sensitive systems, while foreign threat actors were reported to have siphoned sealed judicial records from court filing platforms. The most damaging episode involved widespread unauthorized access to federal databases, resulting in what experts described as the largest exposure of U.S. government data to date. Legal analysts warned that violations of established security protocols could carry long-term legal and national security ramifications. 

The private sector faced equally severe challenges, particularly from organized ransomware and extortion groups. One of the most disruptive campaigns involved attackers exploiting a previously unknown flaw in widely used enterprise business software. By silently accessing systems months before detection, the group extracted vast quantities of sensitive employee and executive data from organizations across education, healthcare, media, and corporate sectors. When victims were finally alerted, many were confronted with ransom demands accompanied by proof of stolen personal information, highlighting the growing sophistication of data-driven extortion tactics. 

Cloud ecosystems also proved to be a major point of exposure. A series of downstream breaches at technology service providers resulted in the theft of approximately one billion records stored within enterprise cloud platforms. By compromising vendors with privileged access, attackers were able to reach data belonging to some of the world’s largest technology companies. The stolen information was later advertised on leak sites, with new victims continuing to surface long after the initial disclosures, underscoring the cascading risks of interconnected software supply chains. 

In the United Kingdom, cyberattacks moved beyond data theft and into large-scale operational disruption. Retailers experienced outages and customer data losses that temporarily crippled supply chains. The most economically damaging incident struck a major automotive manufacturer, halting production for months and triggering financial distress across its supplier network. The economic fallout was so severe that government intervention was required to stabilize the workforce and prevent wider industrial collapse, signaling how cyber incidents can now pose systemic economic threats. 

Asia was not spared from escalating cyber risk. South Korea experienced near-monthly breaches affecting telecom providers, technology firms, and online retail platforms. Tens of millions of citizens had personal data exposed due to prolonged undetected intrusions and inadequate data protection practices. In one of the year’s most consequential incidents, a major retailer suffered months of unauthorized data extraction before discovery, ultimately leading to executive resignations and public scrutiny over corporate accountability. 

Collectively, the events of 2025 demonstrated that cybersecurity failures now carry consequences far beyond IT departments. Disruption, rather than data theft alone, has become a powerful weapon, forcing governments and organizations worldwide to reassess resilience, accountability, and the true cost of digital insecurity.

Iranian Infy Prince of Persia Cyber Espionage Campaign Resurfaces

 

Security researchers have identified renewed cyber activity linked to an Iranian threat actor known as Infy, also referred to as Prince of Persia, marking the group’s re-emergence nearly five years after its last widely reported operations in Europe and the Middle East. According to SafeBreach, the scale and persistence of the group’s recent campaigns suggest it remains an active and capable advanced persistent threat. 

Infy is considered one of the longest-operating APT groups, with its origins traced back to at least 2004. Despite this longevity, it has largely avoided the spotlight compared with other Iranian-linked groups such as Charming Kitten or MuddyWater. Earlier research attributed Infy’s attacks to a relatively focused toolkit built around two primary malware families: Foudre, a downloader and reconnaissance tool, and Tonnerre, a secondary implant used for deeper system compromise and data exfiltration. These tools are believed to be distributed primarily through phishing campaigns. 

Recent analysis from SafeBreach reveals a previously undocumented campaign targeting organizations and individuals across multiple regions, including Iran, Iraq, Turkey, India, Canada, and parts of Europe. The operation relies on updated versions of both Foudre and Tonnerre, with the most recent Tonnerre variant observed in September 2025. Researchers noted changes in initial infection methods, with attackers shifting away from traditional malicious macros toward embedding executables directly within Microsoft Excel documents to initiate malware deployment. 

One of the most distinctive aspects of Infy’s current operations is its resilient command-and-control infrastructure. The malware employs a domain generation algorithm to rotate C2 domains regularly, reducing the likelihood of takedowns. Each domain is authenticated using an RSA-based verification process, ensuring that compromised systems only communicate with attacker-approved servers. SafeBreach researchers observed that the malware retrieves encrypted signature files daily to validate the legitimacy of its C2 endpoints.

Further inspection of the group’s infrastructure uncovered structured directories used for domain verification, logging communications, and storing exfiltrated data. Evidence also suggests the presence of mechanisms designed to support malware updates, indicating ongoing development and maintenance of the toolset. 

The latest version of Tonnerre introduces another notable feature by integrating Telegram as part of its control framework. The malware is capable of interacting with a specific Telegram group through its C2 servers, allowing operators to issue commands and collect stolen data. Access to this functionality appears to be selectively enabled for certain victims, reinforcing the targeted nature of the campaign. 

SafeBreach researchers also identified multiple legacy malware variants associated with Infy’s earlier operations between 2017 and 2020, highlighting a pattern of continuous experimentation and adaptation. Contrary to assumptions that the group had gone dormant after 2022, the new findings indicate sustained activity and operational maturity over the past several years. 

The disclosure coincides with broader research into Iranian cyber operations, including analysis suggesting that some threat groups operate with structured workflows resembling formal government departments. Together, these findings reinforce concerns that Infy remains a persistent espionage threat with evolving technical capabilities and a long-term strategic focus.

CountLoader and GachiLoader Malware Campaigns Target Cracked Software Users

 

Cybersecurity analysts have uncovered a new malware campaign that relies on cracked software download platforms to distribute an updated variant of a stealthy and modular loader known as CountLoader. According to researchers from the Cyderes Howler Cell Threat Intelligence team, the operation uses CountLoader as the entry point in a layered attack designed to establish access, evade defenses, and deploy additional malicious payloads. 

CountLoader has been observed in real-world attacks since at least June 2025 and was previously analyzed by Fortinet and Silent Push. Earlier investigations documented its role in delivering widely used malicious tools such as Cobalt Strike, AdaptixC2, PureHVNC RAT, Amatera Stealer, and cryptomining malware. The latest iteration demonstrates further refinement, with attackers leveraging familiar piracy tactics to lure victims. 

The infection process begins when users attempt to download unauthorized copies of legitimate software, including productivity applications. Victims are redirected to file-hosting platforms where they retrieve a compressed archive containing a password-protected file and a document that supplies the password. Once extracted, the archive reveals a renamed but legitimate Python interpreter configured to run malicious commands. This component uses the Windows utility mshta.exe to fetch the latest version of CountLoader from a remote server.  

To maintain long-term access, the malware establishes persistence through a scheduled task designed to resemble a legitimate Google system process. This task is set to execute every 30 minutes over an extended period and relies on mshta.exe to communicate with fallback domains. CountLoader also checks for the presence of endpoint protection software, specifically CrowdStrike Falcon, adjusting its execution method to reduce the risk of detection if security tools are identified. 

Once active, CountLoader profiles the infected system and retrieves follow-on payloads. The newest version introduces additional capabilities, including spreading through removable USB drives and executing malicious code entirely in memory using mshta.exe or PowerShell. These enhancements allow attackers to minimize their on-disk footprint while increasing lateral movement opportunities. In incidents examined by Cyderes, the final payload delivered was ACR Stealer, a data-harvesting malware designed to extract sensitive information from compromised machines. 

Researchers noted that the campaign reflects a broader shift toward fileless execution and the abuse of trusted, signed binaries. This approach complicates detection and underscores the need for layered defenses and proactive threat monitoring as malware loaders continue to evolve.  

Alongside this activity, Check Point researchers revealed details of another emerging loader named GachiLoader, a heavily obfuscated JavaScript-based malware written in Node.js. This threat is distributed through the so-called YouTube Ghost Network, which consists of hijacked YouTube accounts used to promote malicious downloads. The campaign has been linked to dozens of compromised accounts and hundreds of thousands of video views before takedowns occurred. 

In some cases, GachiLoader has been used to deploy second-stage malware through advanced techniques involving Portable Executable injection and Vectored Exception Handling. The loader performs multiple anti-analysis checks, attempts to gain elevated privileges, and disables key Microsoft Defender components to avoid detection. Security experts say the sophistication displayed in these campaigns highlights the growing technical expertise of threat actors and reinforces the importance of continuously adapting defensive strategies.

OpenAI Warns Future AI Models Could Increase Cybersecurity Risks and Defenses

 

Meanwhile, OpenAI told the press that large language models will get to a level where future generations of these could pose a serious risk to cybersecurity. The company in its blog postingly admitted that powerful AI systems could eventually be used to craft sophisticated cyberattacks, such as developing previously unknown software vulnerabilities or aiding stealthy cyber-espionage operations against well-defended targets. Although this is still theoretical, OpenAI has underlined that the pace with which AI cyber-capability improvements are taking place demands proactive preparation. 

The same advances that could make future models attractive for malicious use, according to the company, also offer significant opportunities to strengthen cyber defense. OpenAI said such progress in reasoning, code analysis, and automation has the potential to significantly enhance security teams' ability to identify weaknesses in systems better, audit complex software systems, and remediate vulnerabilities more effectively. Instead of framing the issue as a threat alone, the company cast the issue as a dual-use challenge-one in which adequate management through safeguards and responsible deployment would be required. 

In the development of such advanced AI systems, OpenAI says it is investing heavily in defensive cybersecurity applications. This includes helping models improve particularly on tasks related to secure code review, vulnerability discovery, and patch validation. It also mentioned its effort on creating tooling supporting defenders in running critical workflows at scale, notably in environments where manual processes are slow or resource-intensive. 

OpenAI identified several technical strategies that it thinks are critical to the mitigation of cyber risk associated with increased capabilities of AI systems: stronger access controls to restrict who has access to sensitive features, hardened infrastructure to prevent abuse, outbound data controls to reduce the risk of information leakage, and continuous monitoring to detect anomalous behavior. These altogether are aimed at reducing the likelihood that advanced capabilities could be leveraged for harmful purposes. 

It also announced the forthcoming launch of a new program offering tiered access to additional cybersecurity-related AI capabilities. This is intended to ensure that researchers, enterprises, and security professionals working on legitimate defensive use cases have access to more advanced tooling while providing appropriate restrictions on higher-risk functionality. Specific timelines were not discussed by OpenAI, although it promised that more would be forthcoming very soon. 

Meanwhile, OpenAI also announced that it would create a Frontier Risk Council comprising renowned cybersecurity experts and industry practitioners. Its initial mandate will lie in assessing the cyber-related risks that come with frontier AI models. But this is expected to expand beyond this in the near future. Its members will be required to offer advice on the question of where the line should fall between developing capability responsibly and possible misuse. And its input would keep informing future safeguards and evaluation frameworks. 

OpenAI also emphasized that the risks of AI-enabled cyber misuse have no single-company or single-platform constraint. Any sophisticated model, across the industry, it said, may be misused if there are no proper controls. To that effect, OpenAI said it continues to collaborate with peers through initiatives such as the Frontier Model Forum, sharing threat modeling insights and best practices. 

By recognizing how AI capabilities could be weaponized and where the points of intervention may lie, the company believes, the industry will go a long way toward balancing innovation and security as AI systems continue to evolve.

Critical FreePBX Vulnerabilities Expose Authentication Bypass and Remote Code Execution Risks

 

Researchers at Horizon3.ai have uncovered several security vulnerabilities within FreePBX, an open-source private branch exchange platform. Among them, one severity flaw could be exploited to bypass authentication if very specific configurations are enabled. The issues were disclosed privately to FreePBX maintainers in mid-September 2025, and the researchers have raised concerns about the exposure of internet-facing PBX deployments.  

According to Horizon3.ai's analysis, the disclosed vulnerabilities affect several FreePBX core components and can be exploited by an attacker to achieve unauthorized access, manipulate databases, upload malicious files, and ultimately execute arbitrary commands. One of the most critical finding involves an authentication bypass weakness that could grant attackers access to the FreePBX Administrator Control Panel without needing valid credentials, given specific conditions. This vulnerability manifests itself in situations where the system's authorization mechanism is configured to trust the web server rather than FreePBX's own user management. 

Although the authentication bypass is not active in the default FreePBX configuration, it becomes exploitable with the addition of multiple advanced settings enabled. Once these are in place, an attacker can create HTTP requests that contain forged authorization headers as a way to provide administrative access. Researchers pointed out that such access can be used to add malicious users to internal database tables effectively to maintain control of the device. The behavior greatly resembles another FreePBX vulnerability disclosed in the past and that was being actively exploited during the first months of 2025.  

Besides the authentication bypass, Horizon3.ai found various SQL injection bugs that impact different endpoints within the platform. These bugs allow authenticated attackers to read from and write to the underlying database by modifying request parameters. Such access can leak call records, credentials, and system configuration data. The researchers also discovered an arbitrary file upload bug that can be exploited as part of having a valid session identifier, thus allowing attacks to upload a PHP-based web shell and use command execution against the underlying server. 

This can be used for extracting sensitive system files or establishing deeper persistence. Horizon3.ai noted that the vulnerabilities are fairly low-complexity to exploit and may enable remote code execution by both authenticated and unauthenticated attackers, depending on which endpoint is exposed and how the system is configured. It added that the PBX systems are an attractive target because such boxes are very exposed to the internet and also often integrated deeply into critical communications infrastructure. The FreePBX project has made patches available to address the issues across supported versions, beginning the rollout in incremental fashion between October and December 2025.

In light of the findings, the project also disabled the ability to configure authentication providers through the web interface and required administrators to configure this setting through command-line tools. Temporary mitigation guidance issued by those impacted encouraged users to transition to the user manager authentication method, limit overrides to advanced settings, and reboot impacted systems to kill potentially unauthorized sessions. Researchers and FreePBX maintainers have called on administrators to check their environments for compromise-especially in cases where the vulnerable authentication configuration was enabled. 

While several vulnerable code paths remain, they require security through additional authentication layers. Security experts underscored that, whenever possible, legacy authentication mechanisms should be avoided because they offer weaker protection against exploitation. The incident serves as a reminder of the importance of secure configuration practices, especially for systems that play a critical role in organizational communications.

Critical CVE-2025-66516 Exposes Apache Tika to XXE Attacks Across Core and Parser Modules

 

A newly disclosed vulnerability in Apache Tika has had the cybersecurity community seriously concerned because researchers have confirmed that it holds a maximum CVSS severity score of 10.0. Labeled as CVE-2025-66516, the vulnerability facilitates XXE attacks and may allow attackers to gain access to internal systems along with sensitive data by taking advantage of how Tika processes certain PDF files. 

Apache Tika is an open-source, highly-used framework for extracting text, metadata, and structured content from a wide array of file formats. It is commonly used within enterprise workflows including compliance systems, document ingestion pipelines, Elasticsearch and Apache Solr indexing, search engines, and automated content scanning processes. Because of its broad use, any severe issue within the platform has wide-ranging consequences.  

According to the advisory for the project, the vulnerability exists in several modules, such as tika-core, tika-parsers, and the tika-pdf-module, on different versions, from 1.13 to 3.2.1. The issue allows an attacker to embed malicious XFA -- a technology that enables XML Forms Architecture -- content inside PDF files. Upon processing, Tika may execute unwanted calls to embedded external XML entities, thus providing a way to fetch restricted files or gain access to internal resources.  

The advisory points out that CVE-2025-66516 concerns an issue that was previously disclosed as CVE-2025-54988, but its scope is considerably broader. Whereas the initial advisory indicated the bug was limited to the PDF parser, subsequent analysis indicated that the root cause of the bug-and therefore the fix-represented in tika-core, not solely its parser component. Consequently, any organization that has patched only the parser without updating tika-core to version 3.2.2 or newer remains vulnerable. 

Researchers also provided some clarification to note that earlier 1.x releases contained the vulnerable PDF parser in the tika-parsers module, so the number of affected systems is higher than initial reporting indicated. 

XXE vulnerabilities arise when software processes XML input without required restrictions, permitting an attacker to use external entities (these are references that can point to either remote URLs or local files). Successfully exploited, this can lead to unauthorized access, SSRF, disclosure of confidential files, or even an escalation of this attack chain into broader compromise. 

Project maintainers strongly recommend immediate updates for all deployments. As no temporary configuration workaround has been confirmed, one can only install patched versions.

Sha1-Hulud Malware Returns With Advanced npm Supply-Chain Attack Targeting Developers

 

A new wave of the Sha1-Hulud malware campaign has unfolded, indicating further exacerbation of supply-chain attacks against the software development ecosystem. The recent attacks have hit the Node Package Manager, or npm, one of the largest open-source package managers that supplies JavaScript developers around the world. Once the attackers compromise vulnerable packages within npm, the malicious code will automatically be executed whenever targeted developers update to vulnerable versions, oblivious to the fact. Current estimates indicate nearly 1,000 npm packages have been tampered with, thereby indirectly affecting tens of thousands of repositories. 

Sha1-Hulud first came into light in September 2025, when it staged its first significant intrusion into npm's ecosystem. The past campaign included the injection of trojanized code into weakly-secured open-source libraries that then infected every development environment that had the components installed. The malware from the initial attack was also encoded with a credential harvesting feature, along with a worm-like mechanism intended for the proliferation of infection. 

The latest rendition, seen in new activity, extends the attack vector and sophistication. Among others, it includes credential theft, self-propagation components, and a destructive "self-destruct" module that aims at deleting user data in case interference with the malware is detected. The malware now demonstrates wide platform compatibility, running across Linux, macOS, and Windows systems, and introduces abuse of GitHub Actions for remote code execution. 

The infection chain starts with a modified installation sequence. Inside the package.json file, the compromised npm packages bear a pre-install script named setup_bun.js. Posing as a legitimate installer for the Bun JavaScript runtime, the script drops a 10MB heavily obfuscated payload named bun_environment.js. From there, malware begins searching for tokens, API keys, GitHub credentials, and other sensitive authentication data. It leverages tools like TruffleHog to find more secrets. After stealing the data, it automatically gets uploaded into a public repository created under the victim's GitHub account, naming it "Sha1-Hulud: The Second Coming," thus making those files accessible not just to the attackers but to actually anyone publicly browsing the repository. 

The malware then uses the stolen npm authentication tokens to compromise new packages maintained by the victim. It injects the same malicious scripts into those packages and republishes them with updated version numbers, triggering automatic deployment across dependent systems. If the victim tries to block access or remove components, the destructive fail-safe is initiated, which wipes home directory files and overwrites data sectors-this significantly reduces the chances of data recovery. 

Security teams are encouraged to temporarily stop updating npm packages, conduct threat-hunting activities for the known IoCs, rotate credentials, and reevaluate controls on supply-chain risk. The researchers recommend treating any system showing signs of infection as completely compromised.

AI Poisoning: How Malicious Data Corrupts Large Language Models Like ChatGPT and Claude

 

Poisoning is a term often associated with the human body or the environment, but it is now a growing problem in the world of artificial intelligence. Large language models such as ChatGPT and Claude are particularly vulnerable to this emerging threat known as AI poisoning. A recent joint study conducted by the UK AI Security Institute, the Alan Turing Institute, and Anthropic revealed that inserting as few as 250 malicious files into a model’s training data can secretly corrupt its behavior. 

AI poisoning occurs when attackers intentionally feed false or misleading information into a model’s training process to alter its responses, bias its outputs, or insert hidden triggers. The goal is to compromise the model’s integrity without detection, leading it to generate incorrect or harmful results. This manipulation can take the form of data poisoning, which happens during the model’s training phase, or model poisoning, which occurs when the model itself is modified after training. Both forms overlap since poisoned data eventually influences the model’s overall behavior. 

A common example of a targeted poisoning attack is the backdoor method. In this scenario, attackers plant specific trigger words or phrases in the data—something that appears normal but activates malicious behavior when used later. For instance, a model could be programmed to respond insultingly to a question if it includes a hidden code word like “alimir123.” Such triggers remain invisible to regular users but can be exploited by those who planted them. 

Indirect attacks, on the other hand, aim to distort the model’s general understanding of topics by flooding its training sources with biased or false content. If attackers publish large amounts of misinformation online, such as false claims about medical treatments, the model may learn and reproduce those inaccuracies as fact. Research shows that even a tiny amount of poisoned data can cause major harm. 

In one experiment, replacing only 0.001% of the tokens in a medical dataset caused models to spread dangerous misinformation while still performing well in standard tests. Another demonstration, called PoisonGPT, showed how a compromised model could distribute false information convincingly while appearing trustworthy. These findings highlight how subtle manipulations can undermine AI reliability without immediate detection. Beyond misinformation, poisoning also poses cybersecurity threats. 

Compromised models could expose personal information, execute unauthorized actions, or be exploited for malicious purposes. Previous incidents, such as the temporary shutdown of ChatGPT in 2023 after a data exposure bug, demonstrate how fragile even the most secure systems can be when dealing with sensitive information. Interestingly, some digital artists have used data poisoning defensively to protect their work from being scraped by AI systems. 

By adding misleading signals to their content, they ensure that any model trained on it produces distorted outputs. This tactic highlights both the creative and destructive potential of data poisoning. The findings from the UK AI Security Institute, Alan Turing Institute, and Anthropic underline the vulnerability of even the most advanced AI models. 

As these systems continue to expand into everyday life, experts warn that maintaining the integrity of training data and ensuring transparency throughout the AI development process will be essential to protect users and prevent manipulation through AI poisoning.

Global Supply Chains at Risk as Indian Third-Party Suppliers Face Rising Cybersecurity Breaches

 

Global supply chains face growing cybersecurity risks as research highlights vulnerabilities in Indian third-party suppliers. According to a recent report by risk management firm SecurityScorecard, more than half of surveyed suppliers in India experienced breaches last year, raising concerns about cascading effects on international businesses. The study examined security postures across multiple sectors, including manufacturing for aerospace and pharmaceuticals, as well as IT service providers. 

The findings suggest that security weaknesses among Indian suppliers are both more widespread and severe than analysts initially anticipated. These vulnerabilities could create a domino effect, exposing global companies that rely on Indian vendors to significant cyber threats. Despite the generally strong security posture of Indian IT service providers, they recorded the highest number of breaches in the study, underscoring their position as prime targets for attackers. 

SecurityScorecard noted that IT service providers worldwide face heightened cyber risks due to their central role in enabling third-party access, their expansive attack surfaces, and their value as high-profile targets. In India, IT companies were found to be particularly vulnerable to typosquatting domains, compromised credentials, and infected devices. The research further revealed that suppliers of outsourced IT operations and managed services were linked to 62.5% of all documented third-party breaches in the country—the highest proportion the company has ever recorded. 

Given India’s dominant role in the global IT services market, the implications are profound. Multinational corporations across industries rely heavily on Indian IT vendors, making them critical nodes in the international digital economy. “India is a cornerstone of the global digital economy,” said Ryan Sherstobitoff, Field Chief Threat Intelligence Officer at SecurityScorecard. “Our findings highlight both strong performance and areas where resilience must improve. Supply chain security is now an operational requirement.” 

The report also emphasized the risks of “fourth-party” vulnerabilities, where the suppliers of Indian companies themselves create additional points of weakness. A single ransomware attack or disruptive incident against an Indian vendor, the researchers warned, could halt manufacturing, delay service delivery, or disrupt logistics across multiple countries. 

The risks are not limited to India. A separate SecurityScorecard study revealed that 96% of Europe’s largest financial institutions have been affected by a breach at a third-party supplier, while 97% reported breaches stemming from fourth-party partners, a sharp increase from 84% two years earlier. 

As global supply chains become increasingly interconnected, these findings highlight the urgent need for businesses to strengthen third-party risk management and enforce stricter cybersecurity practices across their vendor ecosystems. Without stronger safeguards, both direct and indirect supplier vulnerabilities could leave multinational enterprises exposed to significant financial and operational disruptions.

Chat Control Faces Resistance from VPN Industry Over Privacy Concerns


 

The European Union is poised at a decisive crossroads when it comes to shaping the future of digital privacy and is rapidly approaching a landmark ruling which will profoundly alter the way citizens communicate online. 

A final vote on October 14 is expected to take place on September 12, 2025, as Member States will be required to state their position on the proposed Child Sexual Abuse Regulation — commonly referred to as "Chat Control" — in advance of its final vote. Designed to combat the spread of child abuse content, the regulation would place an onus on the providers of messaging services such as WhatsApp, Signal, and iMessage to scan every private message sent between users, even those messages protected from being read by third parties. 

The supporters of the legislation argue that it is a necessary step for ensuring the safety of children, but critics argue that it would effectively legalise mass surveillance, thereby denying citizens access to secure communication and exposing their personal data to the possibility of being misused by government agents or exploited by malicious actors. 

Many observers warn that this vote will set a precedent that could have profound implications for the privacy and democratic freedoms of the continent as a whole if its outcome were to turn out favorably. 

The proposal is called “Chat Control” by its critics, since it requires all messaging platforms operating in Europe to actively scan user conversations, including those that are protected by end-to-end encryption, in search of child sexual abuse material that is well-known and previously unknown. 

In their opinion, such obligations threaten to undermine the very foundations of secure digital communication, creating the possibility of unprecedented levels of monitoring and abuse, which advocates argue could undermine the very foundations of secure digital communication.

The VPN Trust Initiative (VTI), an organisation which represents a group of major VPN providers, has been pushing back strongly against the draft regulation, stating that any attempt to weaken encryption would erode the very basis of the Internet's security. VTI co-chair, Emilija Beranskait, emphasised that "encryption either protects everybody or it doesn't," imploring governments to preserve strong encryption as a cornerstone of privacy, trust, and democratic values, urging them to adopt stronger encryption. 

According to NordVPN's privacy advocate, Laura Tyrylyte, while client-side scanning is indeed a safety and security concern, it is not an acceptable compromise between an organisation's safety and security, contending that solutions must not be compromised in the interest of addressing a single issue alone. 

Moreover, NymVPN's CEO, Harry Halpin, condemned the proposal as “a major step backwards for privacy” and warned that, once normalised, such surveillance tools could be used against journalists, activists, or political opponents. In addition, experts have raised significant technical concerns with the introduction of mandatory scanning mechanisms, stating that such mechanisms will fundamentally undermine the technology underlying online security. 

Moreover, they are concerned that client-side scanning infrastructure could be repurposed so that surveillance is widened far beyond what it was originally intended to do, which runs counter to the European Union's own commitments under initiatives such as the Cyber Resilience Act and efforts to prepare for quantum cryptography in the future. 

However, a deeply divided political debate is ongoing in the EU. Eight member states have formally opposed the proposal, including Germany and Luxembourg, while fifteen others, including France, Italy, and Spain, are still in favour of the proposal. 

There is still some uncertainty regarding the outcome of the October vote because only Estonia, Greece, and Romania have not decided. In addition to the pressure being put on the EU Council, more than 500 cryptography experts and researchers have signed an open letter urging it to reconsider the risks associated with introducing what they consider a dangerous precedent for the future of the digital world in Europe. 

It has been suggested that under the Danish-led proposal, messaging platforms such as WhatsApp, Signal, and ProtonMail would have to scan private communications without discrimination. In their current form, the proposal would violate end-to-end encryption in an irreparable way, according to experts. 

A direct analysis of links, photos, and videos is part of the system that will run directly on the users' devices before messages are encrypted. 

Only government and military accounts are exempt from this analysis, with the draft regulation last circulated to EU delegations on July 24, 2025, claiming to safeguard encryption. Still, privacy specialists are of the opinion that true security cannot be maintained using client-side scanning. 

Laura Tyrylyte, NordVPN's privacy advocate, observed that "Chat Control's client-side scanning provisions create a false choice between security and safety." The solution to one problem, even a serious one like child safety, cannot be at the expense of creating systemic vulnerabilities that are more dangerous to everyone." 

Several other industry leaders expressed similar concerns as well, including Harry Halpin, CEO of NymVPN, who condemned the measure as “a significant step backwards for privacy.” He explained that the indiscriminate scans of private communications are disproportionate in nature, creating a backdoor that could be exploited if it is normalised. 

There is a risk that such infrastructure could easily be redirected towards attacking journalists, political opponents, or activists while also exposing ordinary citizens to hostile cyberattacks. In Halpin's view and the opinion of others, it is more effective to carry out targeted, warrant-based investigations, to take down illegal material swiftly, and to use properly resourced specialist teams rather than universal surveillance as a means of detecting illegal activity. 

However, despite the simple concessions made in the latest draft, such as restricting the detection to visual contents and excluding audio and text, the scientific community has remained steadfast in its criticism regardless of the concessions made. 

The researchers point out that there are four critical flaws to the system: the inability to scan billions of messages accurately; the inevitable weakening of encryption through the monitoring of devices on-device; the high risk that surveillance can expand beyond its stated purpose due to "function creep"; and the danger that mass monitoring in the name of child protection will erode democratic norms. 

While the EU has promised oversight and consent mechanisms, cryptography experts claim that secure and reliable client-side scanning cannot be performed at scale, despite promises of EU oversight and consent mechanisms. This proposal, therefore, is technically flawed as well as politically perilous. 

VPN providers are also signalling that they will not stand on the sidelines if the regulation is passed. Several leading companies, including Mullvad, a popular privacy-focused service, have expressed concern about the possibility of withdrawing from the European market altogether if the proposed legislation is passed. 

If this happens, millions of users will be impacted, and innovation in this field may be curtailed. Similar advocacy groups, including Privacy Guides, have sounded the alarm in the past weeks, warning that the new regulations threaten to undermine the privacy of all citizens, not only those suspected of wrongdoing, and they urge all citizens to take notice before the September 12 deadline. 

A growing number of social media platforms are also being criticised, and voices like Telegram founder Pavel Durov have pointed out that comparable laws have failed in the past, as determined offenders have simply moved to smaller applications or VPNs to avoid these weaker protections, which leaves ordinary users to bear the brunt. 

The debate carries significant economic weight. The Security.org website indicates that more than 75 million Americans already use VPN services to keep their privacy online. As Chat Control advances, this demand is expected to grow rapidly in Europe. As per Future Market Insights, by 2035, the VPN industry is expected to grow to a value of $481.5 billion; however, experts caution that heavy regulation may fragment the market and stifle technological development.

Denmark has continued to lobby for the proposal despite mounting opposition from civil society groups, technology companies, and several member states as the EU Council prepares to vote on October 14, as tensions are increasing. In recent weeks, citizens have taken to online platforms such as X to voice their concerns about the proposed legislation, warning that Europeans would not have fundamentally secure digital privacy. 

Analysts point out that in order to adapt to this changing environment, VPN providers may need to use quantum-resistant technologies faster or explore decentralised models, as highlighted in recent forward-looking studies, which point to the existential stakes of the industry. 

However, one central fear remains across all debates: once surveillance infrastructure is embedded in the environment, its scope is unlikely to be limited to combating child abuse. In their view, it could create a framework for broad and permanent monitoring, reshaping the global norms of digital privacy in a way that undermines both the rights of users and technological innovation in the process. 

A key question to be answered before the EU's vote on October 14 is whether it can successfully balance child protection with its longstanding commitments to privacy and digital rights while maintaining a sense of security. 

It is noted that decisions made in Brussels will have a global impact, potentially setting global standards for how governments deal with encryption, surveillance, and online safety, as experts warn. For legislators, the challenge is to devise effective solutions that protect vulnerable groups without dismantling the secure infrastructures that rely on modern communication, commerce and civic participation. 

One possible path forward, according to observers, could be bolstering cross-border investigative collaboration, strengthening rapid takedown protocols for harmful material, and building specialised law enforcement units which are equipped with advanced tools that are able to target perpetrators rather than citizens collectively, to achieve a better outcome. 

In addition to the fact that private measures would prove better at combating criminal networks, privacy advocates argue that they would also preserve the trust and innovation that Europe has championed for decades, as well as the sense of security that Europe has promoted for decades. 

There will be a clear indication of the EU's global leadership position in safeguarding both child safety and civil liberties through this decision, or whether it will serve as a model for other nations to emulate in terms of surveillance frameworks to maintain secure neighbourhoods.

EU Data Act Compliance Deadline Nears With Three Critical Takeaways


 

A decisive step forward in shaping the future of Europe's digital economy has been taken by the regulation of harmonised rules for fair access to and use of data, commonly known as the EU Data Act, which has moved from a legislative text to a binding document. 

The regulation was first adopted into force on the 11th of January 2024 and came into full effect on the 12th of September 2025, and is regarded as the foundation for the EU’s broader data strategy. Its policymakers believe that this is crucial to the Digital Decade's goal of accelerating digital transformation across industries by ensuring that the data generated within the EU can be shared, accessed, and used more equitably, as a cornerstone of the Digital Decade's ambition. 

The Data Act is not only a technical framework for creating a more equitable digital landscape, but it is also meant to rebalance the balance of power in the digital world, giving rise to new opportunities for innovation while maintaining the integrity of the information. With the implementation of the Data Act in place from 12 September 2025, the regulatory landscape will be dramatically transformed for companies that deal with connected products, digital services, or cloud or other data processing solutions within the European Union, regardless of whether the providers are located within its borders or beyond. 

It seems that businesses were underestimating the scope of the regime before it was enforced, but as a result, the law sets forth a profound set of obligations that go well beyond what was previously known. In essence, this regulation grants digital device and service users unprecedented access rights to the data they generate, regardless of whether that data is personal or otherwise. Until recently, the rights were mostly unregulated, which meant users had unmatched access to data. 

The manufacturer, service provider, and data owner will have to revise existing contractual arrangements in order to comply with this regulation. This will be done by creating a framework for data sharing on fair and transparent terms, as well as ensuring that extensive user entitlements are in place. 

It also imposes new obligations on cloud and processing service providers, requiring them to provide standardised contractual provisions that allow for switching between services. A violation of these requirements will result in a regulatory investigation, civil action, or significant financial penalties, which is the same as a stringent enforcement model used by the General Data Protection Regulation (GDPR), which has already changed the way data practices are handled around the world today. 

According to the EU Data Act, the intention is to revolutionise the way information generated by connected devices and cloud-based services is accessed, managed and exchanged within and across the European Union. In addition to establishing clear rules for access to data, the regulations incorporate obligations to guarantee organisations' service portability, and they embed principles of contractual fairness into business agreements as a result. 

The legislation may have profound long-term consequences, according to industry observers. It is not possible to ignore the impact that the law could have on the digital economy, as Soniya Bopache, vice president and general manager for data compliance at Arctera, pointed out, and she expected that the law would change the dynamics of the use and governance of data for a long time to come. 

It is important to note that the EU Data Act has a broader scope than the technology sector, with implications for industries that include manufacturing, transportation, consumer goods, and cloud computing in addition to the technology sector. Additionally, the regulation is expected to benefit both public and private institutions, emphasising how the regulation has a broad impact. 

Cohesity's vice president and head of technology, Peter Grimmond, commented on the law's potential by suggesting that, by democratising and allowing greater access to data, the law could act as a catalyst for innovation. It was suggested that organisations that already maintain strong compliance and classification procedures will benefit from the Act because it will provide an environment where collaboration can thrive without compromising individual rights or resilience. 

Towards the end of the EU regulation, the concept of data access and transparency was framed as a way to strengthen Europe's data economy and increase competitiveness in the market, according to EU policymakers. It is becoming increasingly evident that connected devices generate unprecedented amounts of information. 

As a result of this legislation, businesses and individuals alike are able to use this data more effectively by granting greater control over the information they produce, which is of great importance to businesses and individuals alike. Additionally, Grimmond said that the new frameworks for data sharing between enterprises are an important driver of long-term benefits for the development of new products, services, and business models, and they will contribute to the long-term development of the economy. 

There is also an important point to be made, which is that the law aims to achieve a balance between the openness of the law and the protected standards that Europe has established, aligned with GDPR's global privacy benchmark, and complementing the Digital Operational Resilience Act (DORA), so that the levels of trust and security are maintained. 

In some ways, the EU Data Act will prove to be even more disruptive than the EU Artificial Intelligence Act, as it will be the most significant overhaul of European data laws since the GDPR and will have a fundamental effect on how businesses handle information collected by connected devices and digital services in the future. 

Essentially, the Regulation is a broad-reaching law that covers both personal data about individuals as well as non-personal data, such as technical and usage information that pertains to virtually every business model associated with digital products and services within the European Union. This law creates new sweeping rights for users, who are entitled to access to the data generated by their connected devices at any time, including real-time, where it is technically feasible, as per Articles 4 and 5. 

Additionally, these rights allow users to determine who else may access such data, whether it be repairers, aftermarket service providers, or even direct competitors, while allowing users to limit how such data is distributed by companies. During the years 2026 and 2030, manufacturers will be required to make sure that products have built-in data accessibility at no extra charge, which will force companies to reconsider their product development cycles, IT infrastructure, and customer contracts in light of this requirement. 

Moreover, the legislation provides guidelines for fair data sharing and stipulates that businesses are required to provide access on reasonable, non-discriminatory terms, and prohibits businesses from stating terms in their contracts that impede or overcharge for access in a way that obstructs it. As a result of this, providers of cloud computing and data processing services face the same transformative obligations as other companies, such as mandatory provisions that allow customers to switch services within 30 days, prohibit excessive exit fees, and insist that contracts be transparent so vendors won't get locked into contracts. 

There are several ways in which these measures could transform fixed-term service contracts into rolling, short-term contracts, which could dramatically alter the business model and competitive dynamics in the cloud industry. The regulation also gives local authorities the right to request data access in cases of emergency or when the public interest requires it, extending its scope beyond purely commercial applications. 

In all Member States, enforcement will be entrusted to national authorities who will be able to impose large fines for non-compliance, as well as provide a new path for collective civil litigation, opening doors to the possibility of mass legal actions similar to class actions in the US. Likely, businesses from a broad range of industries, from repair shops to insurers to logistics providers to AI developers, will all be able to benefit from greater access to operational data. 

In the meantime, sectors such as the energy industry, healthcare, agriculture, and transportation need to be prepared to respond to potential government requests. In total, the Data Act constitutes an important landmark law that rebalances power between companies and users, while redrawing the competitive landscape for Europe's digital economy in the process. In the wake of the EU Data Act's compliance deadline, it will not simply be viewed as a regulatory milestone, but also as a strategic turning point for the digital economy as a whole. 

Business owners must now shift from seeing compliance as an obligation to a means of increasing competitiveness, improving customer trust, and unlocking new value through data-driven innovation to strengthen their competitiveness and deepen customer relationships. In the future, businesses that take proactive steps towards redesigning their products, modernising their IT infrastructure, and cultivating transparent data practices are better positioned to stay ahead of the curve and develop stronger relationships with their users, for whom information is now more in their control. 

Aside from that, the regulation has the potential to accelerate the pace of digital innovation across a wide range of sectors by lowering barriers to switching providers and enforcing fairer contractual standards, stimulating a more dynamic and collaborative marketplace. This Act provides the foundation for a robust public-interest data use system in times of need for governments and regulators. 

In the end, the success of this ambitious framework will rest on how quickly the business world adapts and how effective its methods are at developing a fairer, more transparent, and more competitive European data economy, which can be used as a global benchmark in the future.

Cybersecurity Landscape Shaken as Ransomware Activity Nearly Triples in 2024

 


Ransomware is one of the most persistent threats in the evolving landscape of cybercrime, but its escalation in 2024 has marked an extremely alarming turning point. Infiltrating hospitals, financial institutions, and even government agencies in a manner that has never been attempted before, attackers extended their reach with unprecedented precision, as if they were no longer restricted to high-profile corporations. These sectors tend to be vulnerable to such crippling disruptions in the first place. 

As cybercriminals employed stronger encryption methods and more aggressive extortion tactics, they demonstrated a ruthless pursuit of maximising damages and financial gain. This shift is demonstrated in the newly released data from threat intelligence firm Flashpoint, which reveals that the number of ransomware attacks observed in the first half of 2025 increased by 179 per cent in comparison to 2024 during the same period, almost tripling in size in just a year. 

Throughout the years 2022 and 2023, the ransomware landscape offered little relief due to the relentless escalation of threat actors’ tactics. As a result of the threat of public exposure and data infiltration, attackers increasingly used threats of data infiltration to force companies to conform to regulations. 

Even companies that managed to restore their operations from backups were not spared, as sensitive information was often leaking onto underground forums and leak sites controlled by criminal groups, which led to an increase in ransomware incidence of 13 per cent in 2021 compared to 2021 – an increase far greater than the cumulative increases of the past five years combined. 

Verizon’s Data Breach Investigations Report underscored the severity of this trend. It is important to note that Statista has predicted that about 70 per cent of businesses will face at least one ransomware attack in 2022, marking the highest rate of ransomware attacks ever recorded. In the 2022 year-over-year analysis, it was highlighted that education, government, and healthcare were the industries with the greatest impact in 2022. 

By 2023, healthcare will emerge as one of the most targeted sectors due to attackers' calculated strategy to target industries that are least able to sustain prolonged disruption. In light of the ongoing ransomware crisis, small and mid-sized businesses are considered to be some of the most vulnerable targets. 

As part of Verizon’s research, 832 ransomware-related incidents were documented by small businesses by 2022, 130 of these incidents resulted in confirmed data loss, and nearly 80 per cent of these events were directly related to the ransomware attacks. In an effort to compound the risks, the fact that only half of U.S. small businesses maintain a formal cybersecurity plan, according to a report quoted by UpCity Globally, amplifies the risks. 

A survey conducted by Statista found that 72 per cent of businesses were impacted by ransomware, with 64.9% of those organisations ultimately yielding to ransom demands. In a recent survey of 1,500 cybersecurity professionals conducted by Cyberreason, there was a similar picture of concern. More than two-thirds of all organisations reported experiencing a ransomware attack, a 33 per cent increase over the previous year, with almost two-thirds of the attacks associated with compromised third parties. 

The consequences for organisations were severe and went beyond financial losses in the most significant way. Approximately 40% of companies had to lay off employees following an attack, 35 percent reported resignations of senior executives, and one third temporarily suspended operations as a result of an attack. 

Unfortunately, the persistence of attackers within networks often went undetected for long periods of time. There was a reported 63 per cent of organisations that had been attacked for as long as six months, and others reported that they had been accessed for a period of over a year without being noticed. The majority of companies decided to pay ransoms despite the risks involved, with 49 per cent doing so to avoid revenue losses and 41 per cent to speed up recovery. 

In spite of this, even payment provided no guarantee of data recovery; over half of all companies paying ransom reported corrupted or unusable data after the decryption, while the majority of financial damages were between $1 million and $10 million. The use of generative artificial intelligence within ransomware operations is also an emerging concern. 

Even though the scope of these experiments remains limited, some groups have begun to explore large language models that have the potential to reduce operational burdens, such as automating the generation of phishing templates.To develop a more comprehensive understanding of this capability, researchers have identified Funksec, a group that surfaced in late 2024 and is believed to have contributed to the WormGPT model, as one of the first groups to experiment with it, so more gangs will likely start incorporating artificial intelligence into their tactics in the near future.

Furthermore, analysts at Flashpoint found that gang members are recycling victims from other ransomware groups in order to gain a foothold on underground forums, long after initial breaches. The first half of 2025 has been dominated by a few particularly active operators based on scale: 537 attacks were committed by Akira, 402 attacks were committed by Clop/Cl0p, 345 attacks were committed by Qilin, 233 attacks were committed by Safepay Ransomware, and 23 attacks were performed by RansomHub. 

A significant amount of attention has also been drawn to DragonForce in the United Kingdom after the company targeted household names, including Marks & Spencer and the Co-op Group. Despite being the top target, the United States remained the most vulnerable, with 2,160 attacks, far exceeding Canada’s 249 attacks, Germany’s 154 attacks, and the UK’s 148 attacks—but Brazil, Spain, France, India, and Australia also had high numbers. 

A perspective from the manufacturing and technology industries indicates that these were the industries that were most lucrative, causing 22 and 18 per cent of incidents, respectively. Retail, healthcare, and business services, on the other hand, accounted for 15 per cent. The report also highlighted how the boundaries between hacktivist groups and state-sponsored actors are becoming increasingly blurred, thus illustrating the complexity of today's threat environment. 

During the first half of 2025, 137 threat actor activities tracked were attributed to state-sponsored groups, 9 per cent to hacktivists, while the remaining 51 per cent were attributed to cybercriminal organisations. The Iranian government has shown that a growing focus has been placed on critical infrastructure through entities affiliated with the Iranian state, such as GhostSec and Arabian Ghosts. 

In an attempt to target critical infrastructure, these entities are reported to have targeted programmable logic controllers connected to Israeli media and water systems. As a result, groups such as CyberAv3ngers sought to spread unverified narratives in advance of disruptive technology attacks. As a result, state-aligned operations are often resurfacing under a new identity, such as APT IRAN, demonstrating their shifting strategies and adaptive nature. 

There is a sobering picture of the challenges that lie ahead in light of the increase in ransomware activity as well as the diversification of threat actors. Even though no sector, geography, or organisation size is immune to disruption, it appears that cybercriminals will be able to innovate more rapidly than ever, as well as utilise state-linked tactics to do so in the future, which indicates that the stakes will only get higher as time goes on. 

Proactively managing security goes beyond ensuring compliance or minimising damage; it involves cultivating a culture of security that anticipates threats rather than reacts to them, rather than merely reacting to them. By investing in modern defences like continuous threat intelligence, real-time monitoring, and zero-trust architectures, as well as addressing fundamental weaknesses in supply chains and third-party partnerships, which frequently open themselves up to attacks, companies can significantly reduce their risk exposure as well as their vulnerability to attacks. 

Moreover, it is equally important to address the human aspect of cybersecurity resilience: employees must be aware, incidents should be reported quickly, and leadership needs to be committed to cybersecurity resilience. 

Even though the outlook may seem daunting, organisations that make sure they are prepared rather than complacent will have a better chance of dealing with ransomware as well as the wider range of cyber threats that are reshaping the digital age. A resilient security approach remains the ultimate defence in an environment defined by a persistent attacker and the innovative actions of the attacker.

Britons Risk Privacy by Sharing Sensitive Data with AI Chatbots Despite Security Concerns

 

Nearly one in three individuals in the UK admits to sharing confidential personal details with AI chatbots, such as OpenAI’s ChatGPT, according to new research by cybersecurity firm NymVPN. The study reveals that 30% of Britons have disclosed sensitive data—including banking information and health records—to AI tools, potentially endangering their own privacy and that of others.

Despite 48% of respondents expressing concerns over the safety of AI chatbots, many continue to reveal private details. This habit extends to professional settings, where employees are reportedly sharing internal company and customer information with these platforms.

The findings come amid a wave of high-profile cyberattacks, including the recent breach at Marks & Spencer, which underscores how easily confidential data can be compromised. NymVPN reports that 26% of survey participants have entered financial details related to salaries, mortgages, and investments, while 18% have exposed credit card or bank account numbers. Additionally, 24% acknowledged sharing customer data—such as names and email addresses—and 16% uploaded company financial records and contracts.

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.

Organizations such as M&S, Co-op, and Adidas have already made headlines for data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” Halpin added.

With nearly a quarter of people admitting to sharing customer data with AI tools, experts emphasize the urgent need for businesses to establish strict policies governing AI usage at work.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” Halpin warned.

Completely avoiding AI chatbots might be the safest option, but it’s not always realistic. Users are advised to refrain from entering sensitive information, adjust privacy settings by disabling chat history, or opt out of model training.

Using a VPN can provide an additional layer of online privacy by encrypting internet traffic and masking IP addresses when accessing AI chatbots like ChatGPT. However, even with a VPN, risks remain if individuals continue to input confidential data.

How Generative AI Is Accelerating the Rise of Shadow IT and Cybersecurity Gaps

 

The emergence of generative AI tools in the workplace has reignited concerns about shadow IT—technology solutions adopted by employees without the knowledge or approval of the IT department. While shadow IT has always posed security challenges, the rapid proliferation of AI tools is intensifying the issue, creating new cybersecurity risks for organizations already struggling with visibility and control. 

Employees now have access to a range of AI-powered tools that can streamline daily tasks, from summarizing text to generating code. However, many of these applications operate outside approved systems and can send sensitive corporate data to third-party cloud environments. This introduces serious privacy concerns and increases the risk of data leakage. Unlike legacy software, generative AI solutions can be downloaded and used with minimal friction, making them harder for IT teams to detect and manage. 

The 2025 State of Cybersecurity Report by Ivanti reveals a critical gap between awareness and preparedness. More than half of IT and security leaders acknowledge the threat posed by software and API vulnerabilities. Yet only about one-third feel fully equipped to deal with these risks. The disparity highlights the disconnect between theory and practice, especially as data visibility becomes increasingly fragmented. 

A significant portion of this problem stems from the lack of integrated data systems. Nearly half of organizations admit they do not have enough insight into the software operating on their networks, hindering informed decision-making. When IT and security departments work in isolation—something 55% of organizations still report—it opens the door for unmonitored tools to slip through unnoticed. 

Generative AI has only added to the complexity. Because these tools operate quickly and independently, they can infiltrate enterprise environments before any formal review process occurs. The result is a patchwork of unverified software that can compromise an organization’s overall security posture. 

Rather than attempting to ban shadow IT altogether—a move unlikely to succeed—companies should focus on improving data visibility and fostering collaboration between departments. Unified platforms that connect IT and security functions are essential. With a shared understanding of tools in use, teams can assess risks and apply controls without stifling innovation. 

Creating a culture of transparency is equally important. Employees should feel comfortable voicing their tech needs instead of finding workarounds. Training programs can help users understand the risks of generative AI and encourage safer choices. 

Ultimately, AI is not the root of the problem—lack of oversight is. As the workplace becomes more AI-driven, addressing shadow IT with strategic visibility and collaboration will be critical to building a strong, future-ready defense.

FDA Warns of Cybersecurity Risks in Contec and Epsimed Patient Monitors

 

The U.S. Food and Drug Administration (FDA) has issued a safety communication highlighting cybersecurity vulnerabilities in certain patient monitors manufactured by Contec and relabeled by Epsimed.

The FDA’s notice, published on Thursday, identifies three critical security flaws that could allow unauthorized access to remote monitoring systems, potentially enabling attackers to manipulate device functions. While no incidents, injuries, or deaths have been reported, the agency is urging patients, healthcare professionals, and IT personnel to implement protective measures.

Contec, a China-based medical device manufacturer, produces the CMS8000 patient monitor, which Epsimed sells under its MN-120 product line. These monitors display vital signs and other critical patient information in both healthcare and home settings.

According to the FDA, the vulnerabilities could permit unauthorized users to remotely control the devices, disrupt functionality, and compromise patient data. A hidden backdoor in the software enables attackers to bypass security controls, potentially leading to data breaches or device malfunctions.

The Cybersecurity and Infrastructure Security Agency (CISA) has also assessed the threat, stating that unauthorized changes to the configuration of CMS8000 and MN-120 monitors pose a significant risk to patient safety. A malfunctioning device could lead to improper medical responses to displayed vital signs.

CISA’s findings indicate that the vulnerabilities exist in all analyzed versions of the software and are considered highly severe. An anonymous researcher first reported the security flaws to CISA, prompting further investigation.

To mitigate risks, the FDA advises IT and cybersecurity staff at healthcare facilities to use local monitoring features exclusively. If a monitor relies on remote access, it should be disconnected immediately. Devices that do not require remote monitoring should be removed from the internet by unplugging ethernet cables and disabling WiFi or cellular connections.

“If you cannot disable the wireless capabilities, then continuing to use the device will expose the device to the backdoor and possible continued patient data exfiltration,” the FDA stated. “Be aware, at this time there is no software patch available to help mitigate this risk.”

This warning comes amid increasing concerns about the security of healthcare data. The Office for Civil Rights reported a more than 100% rise in large-scale data breaches from 2018 to 2023, with the number of individuals impacted soaring by over 1000% during the same period.

Data Breach: Georgia Voter Information Accidentally Displayed Online

 


Despite an effort by the Georgian government to provide a new web portal that allows Georgians to cancel their voter registration, the website has come under fire after a technical problem caused personal data to be displayed on users' screens. It was announced on Monday that Georgia's Secretary of State Brad Raffensperger has launched a new website designed to give Georgians the ability to easily and quickly cancel their voting registrations if they move out of the state, or if they lose a loved one who recently passed away. 

During the registration process, users are asked to enter the first letter of their last name, their county of residence, and their date of birth. It will then ask them to provide a reason for their cancellation, followed by a request to provide their driver's license information. After answering the question, the person is prompted to enter their license number if the answer to the question is yes. 

There is a possibility that the voter will be asked to enter their social security number, if they do not already have one, or they will be asked to complete a form that needs to be mailed or emailed to the registration office for their local county. The problem, which Mike Hassinger, Raffensperger spokesman, said lasted less than an hour and has now been resolved, highlighted Democratic concerns that the site could be used by outsiders to unjustifiably cancel voter registrations without the voter's permission. 

There is another example of how states should be aggressive in purging their registration rolls of invalid names. In Georgia, there has been a long-running dispute between Democrats and Republicans over this issue, but it has recently gained new urgency because of an extensive national effort coordinated by Trump party allies to remove names from voter rolls that have garnered new attention. 

There are activists inflamed by the false allegations that the 2020 election was stolen, and they are arguing that the state's existing efforts to clean it up are inadequate and that the inaccuracies invite fraud to take place. In Georgia, as well as throughout the country, there have been very few cases of voters casting ballots improperly from out of state. To counter efforts by disinformation campaigns that are aimed at making people distrust the democratic process, four prominent former government officials from Georgia have joined an organization that is hoping to counter the efforts of disinformation campaigns. 

Despite the launch of the Democracy Defense Project, which was announced by Georgia Republican lawmakers Nathan Deal and Saxby Chambliss, and once again by two Democrat politicians, Roy Barnes the former governor of Georgia, and Shirley Franklin the former mayor of Atlanta, the project seems to have picked up two Georgia Republicans and two Democrats. The Georgia board members are part of a national initiative that aims to raise money for advertisements so that they can push back against efforts to undermine elections and to get people to move beyond talking about "polarizing rhetoric" to increase their chances of getting news coverage and raising votes. 

A new skirmish has arisen over the issue of how aggressively states should purge incorrectly registered citizens from their registration rolls. Democrat and Republican congressional leaders in Georgia have been engaged in a bitter and protracted battle over this issue, but the debate has now gained new urgency due to a campaign launched by Donald Trump's allies to remove names from the voter rolls on a national level. 

According to activists fueled by Trump's false claims that the 2020 election was rigged, there is no way to clean up the mess in an accurate way, and inaccuracies invite fraud into the process. Neither in Georgia nor nationwide have there been any instances of improper out-of-state voting that can be verified scientifically. There have been relatively few cancellations of registrations to date. Typically, cancelling a voter registration in Georgia requires mailing or emailing a form to the county where the voter previously resided. 

The removal of deceased individuals or those convicted of felonies from the voter rolls can be processed relatively swiftly. However, when individuals relocate and do not request the cancellation of their registration, it may take years for them to be removed from the rolls. The state must send mail to those who appear to have moved, and if there is no response, these individuals are moved to inactive status. Despite this, they retain the ability to vote, and their registration is not removed unless they fail to vote in the next two federal general elections. 

Georgia has over 8 million registered voters, including 900,000 classified as inactive. Similar to other states, Georgia allows citizens to challenge an individual's eligibility to vote, particularly when there is personal knowledge of a neighbour moving out of state. Recently, however, residents have increasingly been using impersonal data, such as the National Change of Address list maintained by the U.S. Postal Service, to challenge large numbers of voters. Additionally, some individuals scrutinize the voter rolls to identify people registered at non-residential addresses. 

For instance, a Texas group called True the Vote challenged 364,000 Georgia voters before the two U.S. Senate runoffs in 2021. Since then, approximately 100,000 more challenges have been filed by various individuals and groups. Voters or relatives of deceased individuals can enter personal information on a website to cancel registrations. County officials receive notifications from the state's computer system to remove these voters, and counties will send verification letters to voters who cancel their registrations.

If personal information is unavailable, the system offers a blank copy of a sworn statement of cancellation. However, for a brief period after the website was unveiled, the system inadvertently preprinted the voter's name, address, birth date, driver's license number, and the last four digits of their Social Security number on the affidavit. This error allowed anyone with access to this information to cancel a registration without sending in the sworn statement. 

Butler expressed her alarm, stating she was "terrified" to discover that such sensitive information could be accessed with just a person's name, date of birth, and county of registration. Hassinger explained in a Tuesday statement that the temporary error was likely due to a scheduled software update, and it was detected and resolved within an hour. 

Although Butler commended the swift action by Raffensperger's office, she, along with other Democrats, argued that this issue highlighted the potential for the site to be exploited by external parties to cancel voter registrations. Democratic Party of Georgia Executive Director Tolulope Kevin Olasanoye emphasized that the portal could be misused by right-wing activists already engaged in mass voter challenges to disenfranchise Georgians. Olasanoye called on Raffensperger to disable the website to prevent further abuse.