Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label cybersecurity risks. Show all posts

Old Espionage Techniques Power New Cyber Attacks by Charming Kitten Hackers


 

As zero-day exploits and increasingly sophisticated malware become a norm, a quieter and more calculated threat is beginning to gain momentum - one which relies less on breaking systems than it does on destroying trust. 

In recent months, there have been significant developments in Iran-linked cyber activities, where groups such as Charming Kitten are abandoning conventional vulnerability-driven attacks for deception, psychological manipulation, and carefully orchestrated human interaction. 

Instead of forcing entry through technical loopholes, these actors embed themselves within the digital lives of their targets, posing as credible contacts and cultivating familiarity over time. As a platform-agnostic organization, their operations are both available on macOS and Windows, demonstrating a commitment to maximizing access over exploitative efforts. 

While this occurs, emerging concerns regarding insider-driven data exposure, including allegations of covert methods such as photographing sensitive screens to bypass monitoring systems, underscore a broader reality indicating that the most critical vulnerabilities are no longer associated with code, but with human behavior.

These operations are being carried out by Charming Kitten, a threat group widely linked to Iran's security establishment that has targeted government officials, academic researchers, and corporate employees since its establishment in 2010. As a primary attack vector, the group uses identity deception, impersonating known contacts through convincingly engineered communication to obtain credentials or launch malware, rather than exploiting software flaws or exploit chains. 

As an intentional alignment with traditional intelligence tradecraft, the methodology provides deeper access than purely technical intrusion techniques by cultivating trust and controlling interaction. For this reason, operatives construct layered digital personas based on professional credibility or social engagement as part of this effort and establish rapport with target audiences before executing phishing attacks or delivering payloads.

Using a human-centered approach, it is consistently effective across both Apple and Microsoft environments without relying on platform-specific vulnerabilities, so its effectiveness is consistent across both environments. 

Additionally, insider risk concerns have been intensified in parallel, as investigations indicate the possibility of individuals inside major technology organizations facilitating data exposure through low detection techniques, including the capture of sensitive information physically, thus circumventing conventional cybersecurity controls and reinforcing the complexity of modern threat environments. 

The threat landscape has begun to reflect a more sophisticated approach to visibility and restraint as a result of these targeted intrusion campaigns, in addition to a broader pattern of Iranian-related cyber activity.

In many cases, the activity observed at present has a low level of immediate operational severity, ranging from website defacements and disruptions of distributed denial-of-service to phishing waves, coordinated influence messaging, and reconnaissance of externally exposed infrastructures. These actions, however, are rarely isolated or symbolic; historically, they have served as early indicators of intent, which have enabled the testing of defenses, signaling capabilities, and forming of the operational environment in advance of sustained or covert engagements. 

In extensive and highly adaptable ecosystem is responsible for enabling this activity, which consists of state-aligned advanced persistent threat groups, semi-autonomous proxies, hacktivist fronts, and loosely aligned external collectives. While these actors usually lack overt coordination during periods of geopolitical tension, they are often aligned in their targeting priorities and narrative framing, resulting in disruptive noise and intelligence-driven precision. 

Developing regional dynamics provides the opportunity for this structure to be scalable and implausibly deniable for escalation, particularly in the context of entities in regions aligned with U.S. or Israeli interests. In sectors such as critical infrastructure, energy, telecommunications, logistics, and public administration, high value targets are encountered.

It is important to note that Iran's cyber strategy does not adhere to a single, publicly defined doctrine, but rather represents a pragmatic extension of its broader asymmetric security approach. During the last decade, cyber capabilities have evolved into multipurpose instruments that can be used for intelligence collection, domestic oversight, retaliatory signaling, as well as regional influence. 

The concept of cyber activity is less of a distinct domain within this framework as it is an integral part of statecraft that is designed to operate beneath the threshold of conventional conflict while delivering strategic outcomes. 

Through the surveillance and disruption of opposition networks, it can be applied to strengthen internal regime stability, extract political and economic advantage, and project coercive influence by imposing calculated costs on adversaries while maintaining deniability to achieve political and economic advantage. 

Increasingly, modern cyber operations are being characterized by a convergence of intent and capability which underscores a threat model that incorporates technical intrusions, psychological manipulation, and geopolitical signaling as integral components. These methods are reminiscent of intelligence practices historically associated with Cold War espionage, when cultivating access through trust led to more lasting results than purely technical advancement. 

The current threat landscape operationalizes this principle through the creation of highly curated digital identities that are frequently designed to appear credible or socially engaging. By establishing rapport with their target, adversaries are able to harvest credentials or deliver malware. 

The human-centered intrusion model is independent of platform-specific vulnerabilities and has demonstrated sustained effectiveness across both the Apple and Microsoft ecosystems Nevertheless, parallel concerns have emerged regarding insider risk. 

Investigations have shown that individuals embedded within technology environments can facilitate data exposure through deliberately low-tech methods, such as taking photographs directly from screens, to circumvent conventional monitoring methods. It is a common statement among security practitioners that trusted access remains one of the most difficult vectors to combat, often bypassing even mature security architectures. 

According to analysts, these patterns are not isolated incidents but are part of an integrated intelligence framework integrating cyber operations with human networks, surveillance, and strategic recruitment pipelines. 

In accordance with former Iranian officials, Iran has developed a multi-layered operational model encompassing online intelligence collection, asset cultivation, and procurement mechanisms, which together increase Iran's reach and resilience. It is widely recognized that Iran is a highly sophisticated adversary with the potential to blend psychological operations with technical intrusion, despite historically being overshadowed by larger cyber powers. 

Moreover, the same operational networks have been used to monitor dissident communities beyond national borders, indicating a dual-purpose strategy extending beyond conventional state competition into internal control mechanisms as well. In the context of increasing blurring boundaries between external intelligence gathering and domestic influence operations, attribution and intent assessment become more difficult. 

Several high-profile cases involving alleged insider cooperation further underscore the enduring threat that is posed by human-mediated compromise. Mitigation therefore requires a rigorous, layered security posture that addresses technical as well as behavioral vulnerabilities. Prior to sharing sensitive information, it remains imperative to verify digital identities, particularly in environments susceptible to targeted social engineering schemes. 

By combining strong, unique credentials with multi-factor authentication, it is significantly less likely that a compromised account will occur, while regular updating of antivirus software and endpoint protection solutions provides a baseline level of security.

As part of active network defense, such as properly configured firewalls, unauthorized access pathways can be further limited, and the use of reputable malware detection and remediation tools makes it possible to identify and contain suspicious activity early. These measures reinforce the principle that effective cybersecurity no longer involves merely technological controls, but rather a combination of user awareness, operational vigilance, and adaptive defense strategies.

Increasingly, threat actors are implementing operations that blur the line between human intelligence and cyber intrusion, requiring organizations to increase their focus on resilience beyond perimeter defenses. 

To detect subtle indicators of compromise that do not evade conventional controls, strategic investments in behavioral monitoring, identity governance, and continuous threat intelligence integration will be essential. It is clear that preparedness has evolved from being able to detect and avoid every breach, but rather from being able to anticipate, detect, and respond with precision to adversaries that utilize both systems and human trust to carry out their attacks.

Apple Reinforces Digital Privacy for Users Without Restricting Law Enforcement Oversight


 

The company has long positioned its privacy architecture as a defining aspect of its ecosystem, marketing it as more than a feature, but a fundamental right built into its products as well. However, the latest disclosures emerging from US legal proceedings suggest that privacy boundaries are neither absolute nor impermeable, and that a more nuanced reality emerges. 

It is the "Hide My Email" function that is under scrutiny, a tool designed to hide users' real email addresses from third-party apps and websites. Despite its success in minimizing commercial tracking and unsolicited exposure, recent legal revelations indicate that this layer of anonymity can be effectively reversed under lawful authority to ensure effectiveness. 

Moreover, the development highlights the important distinction between consumer privacy assurances and judicial obligations imposed by technology companies, reframing conditional anonymity as a controlled filter operating within clearly defined legal limits rather than as a cloak of invisibility. 

Subsequent disclosures from investigative proceedings provide additional insight into how this conditional anonymity works in practice. Apple has received a request from federal authorities, including the Federal Bureau of Investigation, for subscriber information regarding a threatening communication directed at Alexis Wilkins, a person who was reported to have been associated with FBI Director Kash Patel.

According to the warrant application, Apple was able to correlate the anonymized "Hide My Email" alias to a specific user account by providing details on subscriber identification along with a wider dataset that contained over a hundred additional aliases created under the same profile. It was found that Homeland Security Investigations investigated an alleged identity fraud operation in a similar manner, in which multiple masked email identities were linked to Apple accounts under underlying identity fraud schemes, allowing investigators to consolidate disparate digital footprints into one framework for attribution. 

Collectively, these examples reveal an important structural aspect of Apple's ecosystem: while certain layers of iCloud services are protected by end-to-end encryption, a portion of account and communication information is still accessible under valid legal processes. Despite the fact that subscriber information, including names, billing credentials, and associated identifiers, remains within the compliance boundary rather than a cryptographic boundary, which does not contain end-to-end encryption of the content. 

The delineation reinforces an issue of broader significance to the industry, in which conventional email infrastructure is built without pervasive encryption safeguards, making it inherently vulnerable to lawful interception by its users. It is against this backdrop that privacy-conscious individuals are increasingly turning to platforms such as Signal, which offer default end-to-end encryption and minimal data retention. 

As for Apple, it has not responded directly to these developments, although the disclosures have prompted a review of how privacy assurances are communicated and understood within technologically advanced and legally obligated environments. A sustained increase in government access requests against major technology providers is reflective of the context in which these disclosures are made. 

According to Apple's transparency data, it processed more than 13,000 such requests for customer information during the first half of 2025, with email-related records contributing significantly to account attribution, threat analysis, and criminal investigations due to their evidentiary value. Nevertheless, this dynamic is not limited to Apple's ecosystem.

Similar constraints exist among providers such as Google and Microsoft, where legacy email protocols - architected in an era before modern encryption standards - continue to limit the amount of privacy protection inherent within their systems. Although niche services such as Proton have attempted to address this issue by implementing end-to-end encryption by design, their adoption remains marginal relative to the global email user base, which underscores the persistence of structurally exposed communication channels within this environment. 

Apple’s position is especially interesting in light of the divergence between its privacy-oriented messaging and its email infrastructure's technical realities. Hide My Email provides demonstrably reduced exposure to commercial tracking and data aggregation, however it does not alter the underlying compliance model governing lawful data access. 

The distinction has re-ignited an ongoing policy debate around encryption, a controversy Apple has previously encountered with the use of iMessage and other Apple services. Regulations and law enforcement agencies contend that inaccessible communications impede legitimate investigations, and extending comparable end-to-end encryption to iCloud Mail may result in renewed friction.

In contrast, privacy advocates contend that any lowering of encryption standards introduces systemic security risks. Thus, email privacy remains a compromise governed both by legal frameworks as well as engineering decisions at present. 

It is common for users seeking stronger privacy to rely on specialized encryption platforms, but such platforms present usability constraints and interoperability challenges with the larger email ecosystem. There is an important distinction to be drawn from recent federal requests: privacy controls designed to limit the visibility of corporate data do not automatically ensure that government access is restricted. 

The implementation of Apple's products is within this boundary, balancing user expectations with statutory obligations. However, there remains a considerable gap between perceptions and operational realities that calls for reevaluation. It is unclear if the company will extend its end-to-end encryption model to email services, particularly in light of the political and regulatory implications of such a shift. 

It is important to note that privacy is not a binary guarantee, but rather a layered construct that is shaped by both technical design and legal jurisdiction as a result of the developments. As such, organizations and individuals alike should reassess their threat models, identifying clearly between protections required for sensitive communications as opposed to protections against commercial data exposure. 

In cases where confidentiality is extremely important, standard email services may be insufficient, which necessitates selective adoption of stronger encryption techniques, secure communication channels, and disciplined data handling procedures. As a result of clear, and often misunderstood, boundaries within which privacy features operate, informed usage remains the most reliable safeguard in an environment where privacy features operate within clearly defined boundaries.

Chinese Tech Leaders See 66 Billion Erased as AI Pressures Intensify

 


Throughout the past year, artificial intelligence has served more as a compelling narrative than a defined revenue stream – one that has steadily inflated expectations across global technology markets. As Alibaba Group Holdings Ltd and Tencent Holdings Ltd encountered an unexpected turn, the narrative was brought to an end.

During a single trading day, the combined market value of the companies declined by approximately $66 billion. There was no single operational error responsible for the abrupt reversal, but a growing sense of unease among investors who had aggressively positioned themselves to benefit from AI-driven profitability. However, they were instead faced with strategic ambiguity.

In spite of significant advancements and high-profile commitments to artificial intelligence, both companies have not been able to articulate a credible and concrete path for monetization despite significant advances and high-profile commitments.

A market reaction like this point to a broader shift in sentiment that suggests the era of rewarding ambition alone has given way to a more rigorous focus on execution, clarity, and measurable results in the rapidly evolving field of artificial intelligence. In spite of the pressure on fundamentals, the market’s skepticism has only grown. 

Alibaba Group Holdings Ltd. reported a significant 67% contraction in net income in its latest quarterly results, reflecting a convergence of structural and strategic strains rather than a single disruption. In a time when underlying consumer demand remains uneven, the increased capital allocation towards artificial intelligence, including compute infrastructure, model development, and ecosystem expansion, is beginning to affect margins materially. 

As a result of this dual burden, the company’s near-term profitability profile has been complicated, which reinforces analyst concerns that sentiment will not stabilize unless AI can be demonstrated to generate incremental, recurring revenue streams. Added to this, Alibaba has announced plans to invest over $53 billion in infrastructure, along with an aspirational target of generating $100 billion in combined cloud and AI revenues within five years. 

Although this indicates scale, it lacks specificity. As a result of the absence of defined timelines, product roadmaps, and monetization mechanisms, markets are becoming increasingly reluctant to discount the degree of uncertainty created. It appears that investors are recalibrating their tolerance of long-term payoffs in a capital-intensive industry that is inherently back-loaded, putting more emphasis on visibility of execution and measurable milestones rather than long-term payoffs. 

Without such alignment, the company's narrative on AI could be perceived as more of a budgetary expenditure cycle rather than a growth engine, further anchoring cautious sentiment. Tencent Holdings Ltd.'s market movements across China's technology sector demonstrate the rapid shift from optimism to recalibration. 

Several days after the company's market value was eroded by approximately $43 billion in one trading session, Alibaba Group Holdings Ltd. recovered. In addition to an additional $23 billion decline in its US-listed stock, its Hong Kong-listed stock also suffered a 7.3% decline. It would appear that these movements echo a broader re-evaluation of valuation assumptions that had been boosted by heightened expectations regarding artificial intelligence-driven growth, until recently. 

Among the factors contributing to this reversal are the rapid unwinding of the speculative surge that occurred earlier in the month, sparked by the viral adoption of OpenClaw, an agentic artificial intelligence platform that captured public imagination with its promises of automating mundane, time-consuming tasks such as managing emails and coordinating travel arrangements. 

Following the Lunar New Year, consumers' enthusiasm increased following the holiday season, resulting in an acceleration in product releases across the sector. Emerging players, such as MiniMax Group Inc., and established incumbents, such as Baidu Inc., introduced competing products and services rapidly, reinforcing the narrative of imminent transformation based on artificial intelligence. 

Tencent's shares soared by over 10% during this period as investor enthusiasm surrounded its own OpenClaw-related initiatives propelled its share price. However, as initial excitement faded, it became increasingly apparent that the rapid proliferation of products was not consistent with clearly defined monetization pathways.

Markets seem to be beginning to differentiate between technological momentum and sustainable economic value as a consequence of the pullback, an inflection point which continues to influence the trajectory of China's leading technology companies within an ever-evolving artificial intelligence environment. 
As a result of the intense competition underpinning China’s AI expansion, the investment narrative has been further complicated. In addition to emerging companies such as MiniMax Group Inc., there are established incumbents such as Baidu Inc.

As a result of the surge in demand, Tencent Holdings Ltd. was the fastest company to roll out AI-based services and applications. With its extensive user database and its control over a vast digital ecosystem, WeChat emerges as a perceived structural beneficiary. Such positioning is widely considered advantageous in the development of agentic AI systems, which rely heavily on access to granular user-level data, such as communication patterns and behavioral signals, to achieve optimal performance. 

Although these inherent advantages exist, investor confidence has been tempered by a lack of operational clarity, despite these inherent advantages. Tencent's management did not articulate specific monetization frameworks, capital allocation thresholds, or product roadmaps in the post-earnings discussions that could translate its ecosystem strengths into scalable revenue streams after earnings. 

Consequently, institutional sentiment has been influenced by the lack of detail, which has prompted valuation models to be recalibrated. A significant downward revision was made by Morgan Stanley, which cited expectations that front-loaded AI investments will continue to put pressure on margins, with profit growth likely to trail revenue growth in the medium term. 

Similarly, Alibaba Group Holding Ltd. is experiencing a parallel dynamic, where strategic imperatives to lead artificial general intelligence development are increasingly intertwining with operational challenges. It has been aggressively deploying capital in order to position itself at the forefront of China's artificial intelligence race, committed to committing more than $53 billion to infrastructure and aiming to generate $100 billion in cloud and AI revenues within the next five years. 

However, it is also experiencing a deceleration in its traditional e-commerce segment as domestic competition intensifies. The company has responded to this by operationalizing aspects of its artificial intelligence portfolio, which have included the introduction of enterprise-focused agentic solutions, such as Wukong, as well as pricing adjustments across its cloud and storage services, resulting in a 34% increase in cloud and storage prices. However, escalating costs remain a barrier to sustainable returns. 

The recent Lunar New Year period has seen major technology firms, including Alibaba, Tencent, ByteDance Ltd., and Baidu, engage in aggressive user acquisition campaigns, distributing billions of dollars in subsidies and incentives in order to stimulate adoption of consumer-facing AI software. 

Although such measures have contributed to short-term engagement gains, they also indicate a trend in which customer acquisition and retention are being subsidized at scale, raising questions about the longevity of unit economics.

In light of the increasing capital intensity across both infrastructure and user growth fronts, it is becoming increasingly necessary for the sector to exercise discipline and demonstrate tangible financial results in order to transition from experimentation to monetization. A key objective of this episode is not to collapse the AI thesis, but rather to reevaluate the way in which its value is assessed and realized. 

A transition from capability building to disciplined commercialization will likely be required for China's leading technology firms in the future, where technical innovation is closely coupled with viable business models and measurable financial outcomes. The investor community is increasingly focused on metrics such as revenue attribution from artificial intelligence services, margin resilience as computing costs rise, and the scalability of enterprise-focused and consumer-facing deployments.

 The importance of strategic clarity will be as strong as technological leadership in this environment. As a result of transparent investment timelines, product differentiation, and sustainable unit economics, companies that are able to articulate coherent monetization frameworks are more apt to restore confidence and justify continued capital inflows. 

As global markets adopt a more selective approach to AI-driven growth narratives, prolonged ambiguity is also likely to extend valuation pressure. Thus, the future will not be determined solely by innovation pace, but also by the ability of the industry to convert its innovations into durable, repeatable sources of value for the industry as a whole.

Malicious Chrome Extensions Target Enterprise HR and ERP Platforms to Steal Credentials

 

One after another, suspicious Chrome add-ons began appearing under false pretenses - each masquerading as helpful utilities. These were pulled from public view only after Socket, a cybersecurity group, traced them back to a single pattern of abuse. Instead of boosting efficiency, they harvested data from corporate systems like Workday, NetSuite, and SAP SuccessFactors. Installation counts climbed past 2,300 across five distinct apps before takedown. Behind the scenes, threat actors leveraged legitimate-looking interfaces to gain access where it mattered most. 

One investigation found that certain browser add-ons aimed to breach corporate systems, either by capturing login details or disrupting protective measures. Though appearing under distinct titles and author profiles, these tools carried matching coding patterns, operational frameworks, and selection methods - pointing to coordination behind their release. A person using the handle databycloud1104 was linked to four of them; another version emerged through a separate label called Software Access. 

Appearing alongside standard business applications, these extensions asked for permissions typical of corporate tools. One moment they promised better control over company accounts, the next they emphasized locking down admin functions. Positioned as productivity aids, several highlighted dashboard interfaces meant to streamline operations across teams. Instead of standing out, their behavior mirrored genuine enterprise solutions. Claiming to boost efficiency or tighten security, each framed its purpose around workplace demands. Not every feature list matched actual functionality, yet on the surface everything seemed aligned with professional needs. 

Yet the investigation revealed every extension hid its actual operations. Although privacy notices were present, they omitted details about gathering user data, retrieving login information, or tracking admin actions. Without visibility, these tools carried out harmful behaviors - such as stealing authentication cookies, altering webpage elements, or taking over active sessions - all while appearing legitimate. What seemed harmless operated differently beneath the surface. 

Repeated extraction of authentication cookies called "__session" occurred across multiple extensions. Despite user logout actions, those credentials kept reaching external servers controlled by attackers. Access to corporate systems remained uninterrupted due to timed transmissions. Traditional sign-in protections failed because live session data was continuously harvested elsewhere. 

Notably, two add-ons - Tool Access 11 and Data By Cloud 2 - took more aggressive steps. Instead of merely monitoring, they interfered directly with key security areas in Workday. Through recognition of page titles, these tools erased information or rerouted admins before reaching control panels. Pages related to login rules appeared blank or led elsewhere. Controls involving active sessions faced similar disruptions. Even IP-based safeguards vanished unexpectedly. Managing passwords became problematic under their influence. Deactivating compromised accounts grew harder. Audit trails for suspicious activity disappeared without notice. As a result, teams lost vital ground when trying to spot intrusions or contain damage. 

What stood out was the Software Access extension’s ability to handle cookies in both directions. Not only did it take cookies from users, but also inserted ones provided by attackers straight into browsers. Because of this, unauthorized individuals gained access to active sessions - no login details or extra verification steps required. The outcome? Full control over corporate accounts within moments. 

Even with few users impacted, Socket highlighted how compromised business logins might enable wider intrusions - such as spreading ransomware or extracting major datasets. After the discovery, the company alerted Google; soon after, the malicious add-ons vanished from the Chrome Web Store. Those who downloaded them should inform internal security staff while resetting access codes across exposed systems to reduce exposure. Though limited in reach, the breach carries serious downstream implications if left unchecked.

AWS CodeBuild Misconfiguration Could Have Enabled Full GitHub Repository Takeover

 

One mistake in how Amazon Web Services set up its CodeBuild tool might have let hackers grab control of official AWS GitHub accounts. That access could spill into more parts of AWS, opening doors for wide-reaching attacks on software supplies. Cloud security team Wiz found the weak spot and called it CodeBreach. They told AWS about it on August 25, 2025. Fixes arrived by September that year. Experts say key pieces inside AWS were at stake - like the popular JavaScript SDK developers rely on every day. 

Into trusted repositories, attackers might have slipped harmful code thanks to CodeBreach, said Wiz team members Yuval Avrahami and Nir Ohfeld. If exploited, many apps using AWS SDKs could face consequences - possibly even disruptions in how the AWS Console functions or risks within user setups. Not a bug inside CodeBuild caused this, but gaps found deeper in automated build processes. These weak spots lived where tools merge and deploy code automatically. 

Something went wrong because the webhook filters had been set up incorrectly. They’re supposed to decide which GitHub actions get permission to start CodeBuild tasks. Only certain people or selected branches should be allowed through, keeping unsafe code changes out of high-access areas. But in a few open-source projects run by AWS, the rules meant to check user IDs didn’t work right. The patterns written to match those users failed at their job. 

Notably, some repositories used regex patterns missing boundary markers at beginning or end, leading to incomplete matches rather than full validation. This gap meant a GitHub user identifier only needed to include an authorized maintainer's number within a larger sequence to slip through. Because GitHub hands out IDs in order, those at Wiz showed how likely it became for upcoming identifiers to accidentally align with known legitimate ones. 

Ahead of any manual effort, bots made it possible to spam GitHub App setups nonstop. One after another, these fake apps rolled out - just waiting for a specific ID pattern to slip through broken checks. When the right match appeared, everything changed quietly. A hidden workflow fired up inside CodeBuild, pulled from what should have stayed locked down. Secrets spilled into logs nobody monitored closely. For aws-sdk-js-v3, that leak handed total control away - tied straight to a powerful token meant to stay private. If hackers gained that much control, they might slip harmful code into secure branches without warning. 

Malicious changes could get approved through rigged pull requests, while hidden data stored in the repo gets quietly pulled out. Once inside, corrupted updates might travel unnoticed through trusted AWS libraries to users relying on them. AWS eventually confirmed some repos lacked tight webhook checks. Still, they noted only certain setups were exposed. 

Now fixed, Amazon says it adjusted those flawed settings. Exposed keys were swapped out, safeguards tightened around building software. Evidence shows CodeBreach wasn’t used by attackers, the firm added. Yet specialists warn - small gaps in automated pipelines might lead to big problems down the line. Now worries grow around CI/CD safety, a new report adds fuel. 

Lately, studies have revealed that poorly set up GitHub Actions might spill sensitive tokens. This mistake lets hackers gain higher permissions in large open-source efforts. What we’re seeing shows tighter checks matter. Running on minimal needed access helps too. How unknown data is processed in builds turns out to be critical. Each step shapes whether systems stay secure.

AsyncRAT Campaign Abuses Cloudflare Services to Hide Malware Operations

 

Cybercriminals distributing the AsyncRAT remote access trojan are exploiting Cloudflare’s free-tier services and TryCloudflare tunneling domains to conceal malicious infrastructure behind widely trusted platforms. By hosting WebDAV servers through Cloudflare, attackers are able to mask command-and-control activity, making detection significantly more difficult for conventional security tools that often whitelist Cloudflare traffic. 

The campaign typically begins with phishing emails that contain Dropbox links. These links deliver files using double extensions, such as .pdf.url, which are designed to mislead recipients into believing they are opening legitimate documents. When the files are opened, victims unknowingly download multi-stage scripts from TryCloudflare domains. At the same time, a genuine PDF document is displayed to reduce suspicion and delay user awareness of malicious activity. 

A notable aspect of this operation is the attackers’ use of legitimate software sources. The malware chain includes downloading official Python distributions directly from Python.org. Once installed, a full Python environment is set up on the compromised system. This environment is then leveraged to execute advanced code injection techniques, specifically targeting the Windows explorer.exe process, allowing the malware to run stealthily within a trusted system component. 

To maintain long-term access, the attackers rely on multiple persistence mechanisms. These include placing scripts such as ahke.bat and olsm.bat in Windows startup folders so they automatically execute when a user logs in. The campaign also uses WebDAV mounting to sustain communication with command-and-control servers hosted through Cloudflare tunnels. 

The threat actors heavily employ so-called “living-off-the-land” techniques, abusing built-in Windows tools such as PowerShell, Windows Script Host, and other native utilities. By blending malicious behavior with legitimate system operations, the attackers further complicate detection and analysis, as their activity closely resembles normal administrative actions. 

According to research cited by Trend Micro, the use of Cloudflare’s infrastructure creates a significant blind spot for many security solutions. Domains containing “trycloudflare.com” often appear trustworthy, allowing AsyncRAT payloads to be delivered without triggering immediate alerts. This abuse of reputable services highlights how attackers increasingly rely on legitimate platforms to scale operations and evade defenses. 

Security researchers warn that although known malicious repositories and infrastructure may be taken down, similar campaigns are likely to reappear using new domains and delivery methods. Monitoring WebDAV connections, scrutinizing traffic involving TryCloudflare domains, and closely analyzing phishing attachments remain critical steps in identifying and mitigating AsyncRAT infections.

A Year of Unprecedented Cybersecurity Incidents Redefined Global Risk in 2025

 

The year 2025 marked a turning point in the global cybersecurity landscape, with the scale, frequency, and impact of attacks surpassing anything seen before. Across governments, enterprises, and critical infrastructure, breaches were no longer isolated technical failures but events with lasting economic, political, and social consequences. The year served as a stark reminder that digital systems underpinning modern life remain deeply vulnerable to both state-backed and financially motivated actors. 

Government systems emerged as some of the most heavily targeted environments. In the United States, multiple federal agencies suffered intrusions throughout the year, including departments responsible for financial oversight and national security. Exploited software vulnerabilities enabled attackers to gain access to sensitive systems, while foreign threat actors were reported to have siphoned sealed judicial records from court filing platforms. The most damaging episode involved widespread unauthorized access to federal databases, resulting in what experts described as the largest exposure of U.S. government data to date. Legal analysts warned that violations of established security protocols could carry long-term legal and national security ramifications. 

The private sector faced equally severe challenges, particularly from organized ransomware and extortion groups. One of the most disruptive campaigns involved attackers exploiting a previously unknown flaw in widely used enterprise business software. By silently accessing systems months before detection, the group extracted vast quantities of sensitive employee and executive data from organizations across education, healthcare, media, and corporate sectors. When victims were finally alerted, many were confronted with ransom demands accompanied by proof of stolen personal information, highlighting the growing sophistication of data-driven extortion tactics. 

Cloud ecosystems also proved to be a major point of exposure. A series of downstream breaches at technology service providers resulted in the theft of approximately one billion records stored within enterprise cloud platforms. By compromising vendors with privileged access, attackers were able to reach data belonging to some of the world’s largest technology companies. The stolen information was later advertised on leak sites, with new victims continuing to surface long after the initial disclosures, underscoring the cascading risks of interconnected software supply chains. 

In the United Kingdom, cyberattacks moved beyond data theft and into large-scale operational disruption. Retailers experienced outages and customer data losses that temporarily crippled supply chains. The most economically damaging incident struck a major automotive manufacturer, halting production for months and triggering financial distress across its supplier network. The economic fallout was so severe that government intervention was required to stabilize the workforce and prevent wider industrial collapse, signaling how cyber incidents can now pose systemic economic threats. 

Asia was not spared from escalating cyber risk. South Korea experienced near-monthly breaches affecting telecom providers, technology firms, and online retail platforms. Tens of millions of citizens had personal data exposed due to prolonged undetected intrusions and inadequate data protection practices. In one of the year’s most consequential incidents, a major retailer suffered months of unauthorized data extraction before discovery, ultimately leading to executive resignations and public scrutiny over corporate accountability. 

Collectively, the events of 2025 demonstrated that cybersecurity failures now carry consequences far beyond IT departments. Disruption, rather than data theft alone, has become a powerful weapon, forcing governments and organizations worldwide to reassess resilience, accountability, and the true cost of digital insecurity.

Iranian Infy Prince of Persia Cyber Espionage Campaign Resurfaces

 

Security researchers have identified renewed cyber activity linked to an Iranian threat actor known as Infy, also referred to as Prince of Persia, marking the group’s re-emergence nearly five years after its last widely reported operations in Europe and the Middle East. According to SafeBreach, the scale and persistence of the group’s recent campaigns suggest it remains an active and capable advanced persistent threat. 

Infy is considered one of the longest-operating APT groups, with its origins traced back to at least 2004. Despite this longevity, it has largely avoided the spotlight compared with other Iranian-linked groups such as Charming Kitten or MuddyWater. Earlier research attributed Infy’s attacks to a relatively focused toolkit built around two primary malware families: Foudre, a downloader and reconnaissance tool, and Tonnerre, a secondary implant used for deeper system compromise and data exfiltration. These tools are believed to be distributed primarily through phishing campaigns. 

Recent analysis from SafeBreach reveals a previously undocumented campaign targeting organizations and individuals across multiple regions, including Iran, Iraq, Turkey, India, Canada, and parts of Europe. The operation relies on updated versions of both Foudre and Tonnerre, with the most recent Tonnerre variant observed in September 2025. Researchers noted changes in initial infection methods, with attackers shifting away from traditional malicious macros toward embedding executables directly within Microsoft Excel documents to initiate malware deployment. 

One of the most distinctive aspects of Infy’s current operations is its resilient command-and-control infrastructure. The malware employs a domain generation algorithm to rotate C2 domains regularly, reducing the likelihood of takedowns. Each domain is authenticated using an RSA-based verification process, ensuring that compromised systems only communicate with attacker-approved servers. SafeBreach researchers observed that the malware retrieves encrypted signature files daily to validate the legitimacy of its C2 endpoints.

Further inspection of the group’s infrastructure uncovered structured directories used for domain verification, logging communications, and storing exfiltrated data. Evidence also suggests the presence of mechanisms designed to support malware updates, indicating ongoing development and maintenance of the toolset. 

The latest version of Tonnerre introduces another notable feature by integrating Telegram as part of its control framework. The malware is capable of interacting with a specific Telegram group through its C2 servers, allowing operators to issue commands and collect stolen data. Access to this functionality appears to be selectively enabled for certain victims, reinforcing the targeted nature of the campaign. 

SafeBreach researchers also identified multiple legacy malware variants associated with Infy’s earlier operations between 2017 and 2020, highlighting a pattern of continuous experimentation and adaptation. Contrary to assumptions that the group had gone dormant after 2022, the new findings indicate sustained activity and operational maturity over the past several years. 

The disclosure coincides with broader research into Iranian cyber operations, including analysis suggesting that some threat groups operate with structured workflows resembling formal government departments. Together, these findings reinforce concerns that Infy remains a persistent espionage threat with evolving technical capabilities and a long-term strategic focus.

CountLoader and GachiLoader Malware Campaigns Target Cracked Software Users

 

Cybersecurity analysts have uncovered a new malware campaign that relies on cracked software download platforms to distribute an updated variant of a stealthy and modular loader known as CountLoader. According to researchers from the Cyderes Howler Cell Threat Intelligence team, the operation uses CountLoader as the entry point in a layered attack designed to establish access, evade defenses, and deploy additional malicious payloads. 

CountLoader has been observed in real-world attacks since at least June 2025 and was previously analyzed by Fortinet and Silent Push. Earlier investigations documented its role in delivering widely used malicious tools such as Cobalt Strike, AdaptixC2, PureHVNC RAT, Amatera Stealer, and cryptomining malware. The latest iteration demonstrates further refinement, with attackers leveraging familiar piracy tactics to lure victims. 

The infection process begins when users attempt to download unauthorized copies of legitimate software, including productivity applications. Victims are redirected to file-hosting platforms where they retrieve a compressed archive containing a password-protected file and a document that supplies the password. Once extracted, the archive reveals a renamed but legitimate Python interpreter configured to run malicious commands. This component uses the Windows utility mshta.exe to fetch the latest version of CountLoader from a remote server.  

To maintain long-term access, the malware establishes persistence through a scheduled task designed to resemble a legitimate Google system process. This task is set to execute every 30 minutes over an extended period and relies on mshta.exe to communicate with fallback domains. CountLoader also checks for the presence of endpoint protection software, specifically CrowdStrike Falcon, adjusting its execution method to reduce the risk of detection if security tools are identified. 

Once active, CountLoader profiles the infected system and retrieves follow-on payloads. The newest version introduces additional capabilities, including spreading through removable USB drives and executing malicious code entirely in memory using mshta.exe or PowerShell. These enhancements allow attackers to minimize their on-disk footprint while increasing lateral movement opportunities. In incidents examined by Cyderes, the final payload delivered was ACR Stealer, a data-harvesting malware designed to extract sensitive information from compromised machines. 

Researchers noted that the campaign reflects a broader shift toward fileless execution and the abuse of trusted, signed binaries. This approach complicates detection and underscores the need for layered defenses and proactive threat monitoring as malware loaders continue to evolve.  

Alongside this activity, Check Point researchers revealed details of another emerging loader named GachiLoader, a heavily obfuscated JavaScript-based malware written in Node.js. This threat is distributed through the so-called YouTube Ghost Network, which consists of hijacked YouTube accounts used to promote malicious downloads. The campaign has been linked to dozens of compromised accounts and hundreds of thousands of video views before takedowns occurred. 

In some cases, GachiLoader has been used to deploy second-stage malware through advanced techniques involving Portable Executable injection and Vectored Exception Handling. The loader performs multiple anti-analysis checks, attempts to gain elevated privileges, and disables key Microsoft Defender components to avoid detection. Security experts say the sophistication displayed in these campaigns highlights the growing technical expertise of threat actors and reinforces the importance of continuously adapting defensive strategies.

OpenAI Warns Future AI Models Could Increase Cybersecurity Risks and Defenses

 

Meanwhile, OpenAI told the press that large language models will get to a level where future generations of these could pose a serious risk to cybersecurity. The company in its blog postingly admitted that powerful AI systems could eventually be used to craft sophisticated cyberattacks, such as developing previously unknown software vulnerabilities or aiding stealthy cyber-espionage operations against well-defended targets. Although this is still theoretical, OpenAI has underlined that the pace with which AI cyber-capability improvements are taking place demands proactive preparation. 

The same advances that could make future models attractive for malicious use, according to the company, also offer significant opportunities to strengthen cyber defense. OpenAI said such progress in reasoning, code analysis, and automation has the potential to significantly enhance security teams' ability to identify weaknesses in systems better, audit complex software systems, and remediate vulnerabilities more effectively. Instead of framing the issue as a threat alone, the company cast the issue as a dual-use challenge-one in which adequate management through safeguards and responsible deployment would be required. 

In the development of such advanced AI systems, OpenAI says it is investing heavily in defensive cybersecurity applications. This includes helping models improve particularly on tasks related to secure code review, vulnerability discovery, and patch validation. It also mentioned its effort on creating tooling supporting defenders in running critical workflows at scale, notably in environments where manual processes are slow or resource-intensive. 

OpenAI identified several technical strategies that it thinks are critical to the mitigation of cyber risk associated with increased capabilities of AI systems: stronger access controls to restrict who has access to sensitive features, hardened infrastructure to prevent abuse, outbound data controls to reduce the risk of information leakage, and continuous monitoring to detect anomalous behavior. These altogether are aimed at reducing the likelihood that advanced capabilities could be leveraged for harmful purposes. 

It also announced the forthcoming launch of a new program offering tiered access to additional cybersecurity-related AI capabilities. This is intended to ensure that researchers, enterprises, and security professionals working on legitimate defensive use cases have access to more advanced tooling while providing appropriate restrictions on higher-risk functionality. Specific timelines were not discussed by OpenAI, although it promised that more would be forthcoming very soon. 

Meanwhile, OpenAI also announced that it would create a Frontier Risk Council comprising renowned cybersecurity experts and industry practitioners. Its initial mandate will lie in assessing the cyber-related risks that come with frontier AI models. But this is expected to expand beyond this in the near future. Its members will be required to offer advice on the question of where the line should fall between developing capability responsibly and possible misuse. And its input would keep informing future safeguards and evaluation frameworks. 

OpenAI also emphasized that the risks of AI-enabled cyber misuse have no single-company or single-platform constraint. Any sophisticated model, across the industry, it said, may be misused if there are no proper controls. To that effect, OpenAI said it continues to collaborate with peers through initiatives such as the Frontier Model Forum, sharing threat modeling insights and best practices. 

By recognizing how AI capabilities could be weaponized and where the points of intervention may lie, the company believes, the industry will go a long way toward balancing innovation and security as AI systems continue to evolve.

Critical FreePBX Vulnerabilities Expose Authentication Bypass and Remote Code Execution Risks

 

Researchers at Horizon3.ai have uncovered several security vulnerabilities within FreePBX, an open-source private branch exchange platform. Among them, one severity flaw could be exploited to bypass authentication if very specific configurations are enabled. The issues were disclosed privately to FreePBX maintainers in mid-September 2025, and the researchers have raised concerns about the exposure of internet-facing PBX deployments.  

According to Horizon3.ai's analysis, the disclosed vulnerabilities affect several FreePBX core components and can be exploited by an attacker to achieve unauthorized access, manipulate databases, upload malicious files, and ultimately execute arbitrary commands. One of the most critical finding involves an authentication bypass weakness that could grant attackers access to the FreePBX Administrator Control Panel without needing valid credentials, given specific conditions. This vulnerability manifests itself in situations where the system's authorization mechanism is configured to trust the web server rather than FreePBX's own user management. 

Although the authentication bypass is not active in the default FreePBX configuration, it becomes exploitable with the addition of multiple advanced settings enabled. Once these are in place, an attacker can create HTTP requests that contain forged authorization headers as a way to provide administrative access. Researchers pointed out that such access can be used to add malicious users to internal database tables effectively to maintain control of the device. The behavior greatly resembles another FreePBX vulnerability disclosed in the past and that was being actively exploited during the first months of 2025.  

Besides the authentication bypass, Horizon3.ai found various SQL injection bugs that impact different endpoints within the platform. These bugs allow authenticated attackers to read from and write to the underlying database by modifying request parameters. Such access can leak call records, credentials, and system configuration data. The researchers also discovered an arbitrary file upload bug that can be exploited as part of having a valid session identifier, thus allowing attacks to upload a PHP-based web shell and use command execution against the underlying server. 

This can be used for extracting sensitive system files or establishing deeper persistence. Horizon3.ai noted that the vulnerabilities are fairly low-complexity to exploit and may enable remote code execution by both authenticated and unauthenticated attackers, depending on which endpoint is exposed and how the system is configured. It added that the PBX systems are an attractive target because such boxes are very exposed to the internet and also often integrated deeply into critical communications infrastructure. The FreePBX project has made patches available to address the issues across supported versions, beginning the rollout in incremental fashion between October and December 2025.

In light of the findings, the project also disabled the ability to configure authentication providers through the web interface and required administrators to configure this setting through command-line tools. Temporary mitigation guidance issued by those impacted encouraged users to transition to the user manager authentication method, limit overrides to advanced settings, and reboot impacted systems to kill potentially unauthorized sessions. Researchers and FreePBX maintainers have called on administrators to check their environments for compromise-especially in cases where the vulnerable authentication configuration was enabled. 

While several vulnerable code paths remain, they require security through additional authentication layers. Security experts underscored that, whenever possible, legacy authentication mechanisms should be avoided because they offer weaker protection against exploitation. The incident serves as a reminder of the importance of secure configuration practices, especially for systems that play a critical role in organizational communications.

Critical CVE-2025-66516 Exposes Apache Tika to XXE Attacks Across Core and Parser Modules

 

A newly disclosed vulnerability in Apache Tika has had the cybersecurity community seriously concerned because researchers have confirmed that it holds a maximum CVSS severity score of 10.0. Labeled as CVE-2025-66516, the vulnerability facilitates XXE attacks and may allow attackers to gain access to internal systems along with sensitive data by taking advantage of how Tika processes certain PDF files. 

Apache Tika is an open-source, highly-used framework for extracting text, metadata, and structured content from a wide array of file formats. It is commonly used within enterprise workflows including compliance systems, document ingestion pipelines, Elasticsearch and Apache Solr indexing, search engines, and automated content scanning processes. Because of its broad use, any severe issue within the platform has wide-ranging consequences.  

According to the advisory for the project, the vulnerability exists in several modules, such as tika-core, tika-parsers, and the tika-pdf-module, on different versions, from 1.13 to 3.2.1. The issue allows an attacker to embed malicious XFA -- a technology that enables XML Forms Architecture -- content inside PDF files. Upon processing, Tika may execute unwanted calls to embedded external XML entities, thus providing a way to fetch restricted files or gain access to internal resources.  

The advisory points out that CVE-2025-66516 concerns an issue that was previously disclosed as CVE-2025-54988, but its scope is considerably broader. Whereas the initial advisory indicated the bug was limited to the PDF parser, subsequent analysis indicated that the root cause of the bug-and therefore the fix-represented in tika-core, not solely its parser component. Consequently, any organization that has patched only the parser without updating tika-core to version 3.2.2 or newer remains vulnerable. 

Researchers also provided some clarification to note that earlier 1.x releases contained the vulnerable PDF parser in the tika-parsers module, so the number of affected systems is higher than initial reporting indicated. 

XXE vulnerabilities arise when software processes XML input without required restrictions, permitting an attacker to use external entities (these are references that can point to either remote URLs or local files). Successfully exploited, this can lead to unauthorized access, SSRF, disclosure of confidential files, or even an escalation of this attack chain into broader compromise. 

Project maintainers strongly recommend immediate updates for all deployments. As no temporary configuration workaround has been confirmed, one can only install patched versions.

Sha1-Hulud Malware Returns With Advanced npm Supply-Chain Attack Targeting Developers

 

A new wave of the Sha1-Hulud malware campaign has unfolded, indicating further exacerbation of supply-chain attacks against the software development ecosystem. The recent attacks have hit the Node Package Manager, or npm, one of the largest open-source package managers that supplies JavaScript developers around the world. Once the attackers compromise vulnerable packages within npm, the malicious code will automatically be executed whenever targeted developers update to vulnerable versions, oblivious to the fact. Current estimates indicate nearly 1,000 npm packages have been tampered with, thereby indirectly affecting tens of thousands of repositories. 

Sha1-Hulud first came into light in September 2025, when it staged its first significant intrusion into npm's ecosystem. The past campaign included the injection of trojanized code into weakly-secured open-source libraries that then infected every development environment that had the components installed. The malware from the initial attack was also encoded with a credential harvesting feature, along with a worm-like mechanism intended for the proliferation of infection. 

The latest rendition, seen in new activity, extends the attack vector and sophistication. Among others, it includes credential theft, self-propagation components, and a destructive "self-destruct" module that aims at deleting user data in case interference with the malware is detected. The malware now demonstrates wide platform compatibility, running across Linux, macOS, and Windows systems, and introduces abuse of GitHub Actions for remote code execution. 

The infection chain starts with a modified installation sequence. Inside the package.json file, the compromised npm packages bear a pre-install script named setup_bun.js. Posing as a legitimate installer for the Bun JavaScript runtime, the script drops a 10MB heavily obfuscated payload named bun_environment.js. From there, malware begins searching for tokens, API keys, GitHub credentials, and other sensitive authentication data. It leverages tools like TruffleHog to find more secrets. After stealing the data, it automatically gets uploaded into a public repository created under the victim's GitHub account, naming it "Sha1-Hulud: The Second Coming," thus making those files accessible not just to the attackers but to actually anyone publicly browsing the repository. 

The malware then uses the stolen npm authentication tokens to compromise new packages maintained by the victim. It injects the same malicious scripts into those packages and republishes them with updated version numbers, triggering automatic deployment across dependent systems. If the victim tries to block access or remove components, the destructive fail-safe is initiated, which wipes home directory files and overwrites data sectors-this significantly reduces the chances of data recovery. 

Security teams are encouraged to temporarily stop updating npm packages, conduct threat-hunting activities for the known IoCs, rotate credentials, and reevaluate controls on supply-chain risk. The researchers recommend treating any system showing signs of infection as completely compromised.

AI Poisoning: How Malicious Data Corrupts Large Language Models Like ChatGPT and Claude

 

Poisoning is a term often associated with the human body or the environment, but it is now a growing problem in the world of artificial intelligence. Large language models such as ChatGPT and Claude are particularly vulnerable to this emerging threat known as AI poisoning. A recent joint study conducted by the UK AI Security Institute, the Alan Turing Institute, and Anthropic revealed that inserting as few as 250 malicious files into a model’s training data can secretly corrupt its behavior. 

AI poisoning occurs when attackers intentionally feed false or misleading information into a model’s training process to alter its responses, bias its outputs, or insert hidden triggers. The goal is to compromise the model’s integrity without detection, leading it to generate incorrect or harmful results. This manipulation can take the form of data poisoning, which happens during the model’s training phase, or model poisoning, which occurs when the model itself is modified after training. Both forms overlap since poisoned data eventually influences the model’s overall behavior. 

A common example of a targeted poisoning attack is the backdoor method. In this scenario, attackers plant specific trigger words or phrases in the data—something that appears normal but activates malicious behavior when used later. For instance, a model could be programmed to respond insultingly to a question if it includes a hidden code word like “alimir123.” Such triggers remain invisible to regular users but can be exploited by those who planted them. 

Indirect attacks, on the other hand, aim to distort the model’s general understanding of topics by flooding its training sources with biased or false content. If attackers publish large amounts of misinformation online, such as false claims about medical treatments, the model may learn and reproduce those inaccuracies as fact. Research shows that even a tiny amount of poisoned data can cause major harm. 

In one experiment, replacing only 0.001% of the tokens in a medical dataset caused models to spread dangerous misinformation while still performing well in standard tests. Another demonstration, called PoisonGPT, showed how a compromised model could distribute false information convincingly while appearing trustworthy. These findings highlight how subtle manipulations can undermine AI reliability without immediate detection. Beyond misinformation, poisoning also poses cybersecurity threats. 

Compromised models could expose personal information, execute unauthorized actions, or be exploited for malicious purposes. Previous incidents, such as the temporary shutdown of ChatGPT in 2023 after a data exposure bug, demonstrate how fragile even the most secure systems can be when dealing with sensitive information. Interestingly, some digital artists have used data poisoning defensively to protect their work from being scraped by AI systems. 

By adding misleading signals to their content, they ensure that any model trained on it produces distorted outputs. This tactic highlights both the creative and destructive potential of data poisoning. The findings from the UK AI Security Institute, Alan Turing Institute, and Anthropic underline the vulnerability of even the most advanced AI models. 

As these systems continue to expand into everyday life, experts warn that maintaining the integrity of training data and ensuring transparency throughout the AI development process will be essential to protect users and prevent manipulation through AI poisoning.

Global Supply Chains at Risk as Indian Third-Party Suppliers Face Rising Cybersecurity Breaches

 

Global supply chains face growing cybersecurity risks as research highlights vulnerabilities in Indian third-party suppliers. According to a recent report by risk management firm SecurityScorecard, more than half of surveyed suppliers in India experienced breaches last year, raising concerns about cascading effects on international businesses. The study examined security postures across multiple sectors, including manufacturing for aerospace and pharmaceuticals, as well as IT service providers. 

The findings suggest that security weaknesses among Indian suppliers are both more widespread and severe than analysts initially anticipated. These vulnerabilities could create a domino effect, exposing global companies that rely on Indian vendors to significant cyber threats. Despite the generally strong security posture of Indian IT service providers, they recorded the highest number of breaches in the study, underscoring their position as prime targets for attackers. 

SecurityScorecard noted that IT service providers worldwide face heightened cyber risks due to their central role in enabling third-party access, their expansive attack surfaces, and their value as high-profile targets. In India, IT companies were found to be particularly vulnerable to typosquatting domains, compromised credentials, and infected devices. The research further revealed that suppliers of outsourced IT operations and managed services were linked to 62.5% of all documented third-party breaches in the country—the highest proportion the company has ever recorded. 

Given India’s dominant role in the global IT services market, the implications are profound. Multinational corporations across industries rely heavily on Indian IT vendors, making them critical nodes in the international digital economy. “India is a cornerstone of the global digital economy,” said Ryan Sherstobitoff, Field Chief Threat Intelligence Officer at SecurityScorecard. “Our findings highlight both strong performance and areas where resilience must improve. Supply chain security is now an operational requirement.” 

The report also emphasized the risks of “fourth-party” vulnerabilities, where the suppliers of Indian companies themselves create additional points of weakness. A single ransomware attack or disruptive incident against an Indian vendor, the researchers warned, could halt manufacturing, delay service delivery, or disrupt logistics across multiple countries. 

The risks are not limited to India. A separate SecurityScorecard study revealed that 96% of Europe’s largest financial institutions have been affected by a breach at a third-party supplier, while 97% reported breaches stemming from fourth-party partners, a sharp increase from 84% two years earlier. 

As global supply chains become increasingly interconnected, these findings highlight the urgent need for businesses to strengthen third-party risk management and enforce stricter cybersecurity practices across their vendor ecosystems. Without stronger safeguards, both direct and indirect supplier vulnerabilities could leave multinational enterprises exposed to significant financial and operational disruptions.

Chat Control Faces Resistance from VPN Industry Over Privacy Concerns


 

The European Union is poised at a decisive crossroads when it comes to shaping the future of digital privacy and is rapidly approaching a landmark ruling which will profoundly alter the way citizens communicate online. 

A final vote on October 14 is expected to take place on September 12, 2025, as Member States will be required to state their position on the proposed Child Sexual Abuse Regulation — commonly referred to as "Chat Control" — in advance of its final vote. Designed to combat the spread of child abuse content, the regulation would place an onus on the providers of messaging services such as WhatsApp, Signal, and iMessage to scan every private message sent between users, even those messages protected from being read by third parties. 

The supporters of the legislation argue that it is a necessary step for ensuring the safety of children, but critics argue that it would effectively legalise mass surveillance, thereby denying citizens access to secure communication and exposing their personal data to the possibility of being misused by government agents or exploited by malicious actors. 

Many observers warn that this vote will set a precedent that could have profound implications for the privacy and democratic freedoms of the continent as a whole if its outcome were to turn out favorably. 

The proposal is called “Chat Control” by its critics, since it requires all messaging platforms operating in Europe to actively scan user conversations, including those that are protected by end-to-end encryption, in search of child sexual abuse material that is well-known and previously unknown. 

In their opinion, such obligations threaten to undermine the very foundations of secure digital communication, creating the possibility of unprecedented levels of monitoring and abuse, which advocates argue could undermine the very foundations of secure digital communication.

The VPN Trust Initiative (VTI), an organisation which represents a group of major VPN providers, has been pushing back strongly against the draft regulation, stating that any attempt to weaken encryption would erode the very basis of the Internet's security. VTI co-chair, Emilija Beranskait, emphasised that "encryption either protects everybody or it doesn't," imploring governments to preserve strong encryption as a cornerstone of privacy, trust, and democratic values, urging them to adopt stronger encryption. 

According to NordVPN's privacy advocate, Laura Tyrylyte, while client-side scanning is indeed a safety and security concern, it is not an acceptable compromise between an organisation's safety and security, contending that solutions must not be compromised in the interest of addressing a single issue alone. 

Moreover, NymVPN's CEO, Harry Halpin, condemned the proposal as “a major step backwards for privacy” and warned that, once normalised, such surveillance tools could be used against journalists, activists, or political opponents. In addition, experts have raised significant technical concerns with the introduction of mandatory scanning mechanisms, stating that such mechanisms will fundamentally undermine the technology underlying online security. 

Moreover, they are concerned that client-side scanning infrastructure could be repurposed so that surveillance is widened far beyond what it was originally intended to do, which runs counter to the European Union's own commitments under initiatives such as the Cyber Resilience Act and efforts to prepare for quantum cryptography in the future. 

However, a deeply divided political debate is ongoing in the EU. Eight member states have formally opposed the proposal, including Germany and Luxembourg, while fifteen others, including France, Italy, and Spain, are still in favour of the proposal. 

There is still some uncertainty regarding the outcome of the October vote because only Estonia, Greece, and Romania have not decided. In addition to the pressure being put on the EU Council, more than 500 cryptography experts and researchers have signed an open letter urging it to reconsider the risks associated with introducing what they consider a dangerous precedent for the future of the digital world in Europe. 

It has been suggested that under the Danish-led proposal, messaging platforms such as WhatsApp, Signal, and ProtonMail would have to scan private communications without discrimination. In their current form, the proposal would violate end-to-end encryption in an irreparable way, according to experts. 

A direct analysis of links, photos, and videos is part of the system that will run directly on the users' devices before messages are encrypted. 

Only government and military accounts are exempt from this analysis, with the draft regulation last circulated to EU delegations on July 24, 2025, claiming to safeguard encryption. Still, privacy specialists are of the opinion that true security cannot be maintained using client-side scanning. 

Laura Tyrylyte, NordVPN's privacy advocate, observed that "Chat Control's client-side scanning provisions create a false choice between security and safety." The solution to one problem, even a serious one like child safety, cannot be at the expense of creating systemic vulnerabilities that are more dangerous to everyone." 

Several other industry leaders expressed similar concerns as well, including Harry Halpin, CEO of NymVPN, who condemned the measure as “a significant step backwards for privacy.” He explained that the indiscriminate scans of private communications are disproportionate in nature, creating a backdoor that could be exploited if it is normalised. 

There is a risk that such infrastructure could easily be redirected towards attacking journalists, political opponents, or activists while also exposing ordinary citizens to hostile cyberattacks. In Halpin's view and the opinion of others, it is more effective to carry out targeted, warrant-based investigations, to take down illegal material swiftly, and to use properly resourced specialist teams rather than universal surveillance as a means of detecting illegal activity. 

However, despite the simple concessions made in the latest draft, such as restricting the detection to visual contents and excluding audio and text, the scientific community has remained steadfast in its criticism regardless of the concessions made. 

The researchers point out that there are four critical flaws to the system: the inability to scan billions of messages accurately; the inevitable weakening of encryption through the monitoring of devices on-device; the high risk that surveillance can expand beyond its stated purpose due to "function creep"; and the danger that mass monitoring in the name of child protection will erode democratic norms. 

While the EU has promised oversight and consent mechanisms, cryptography experts claim that secure and reliable client-side scanning cannot be performed at scale, despite promises of EU oversight and consent mechanisms. This proposal, therefore, is technically flawed as well as politically perilous. 

VPN providers are also signalling that they will not stand on the sidelines if the regulation is passed. Several leading companies, including Mullvad, a popular privacy-focused service, have expressed concern about the possibility of withdrawing from the European market altogether if the proposed legislation is passed. 

If this happens, millions of users will be impacted, and innovation in this field may be curtailed. Similar advocacy groups, including Privacy Guides, have sounded the alarm in the past weeks, warning that the new regulations threaten to undermine the privacy of all citizens, not only those suspected of wrongdoing, and they urge all citizens to take notice before the September 12 deadline. 

A growing number of social media platforms are also being criticised, and voices like Telegram founder Pavel Durov have pointed out that comparable laws have failed in the past, as determined offenders have simply moved to smaller applications or VPNs to avoid these weaker protections, which leaves ordinary users to bear the brunt. 

The debate carries significant economic weight. The Security.org website indicates that more than 75 million Americans already use VPN services to keep their privacy online. As Chat Control advances, this demand is expected to grow rapidly in Europe. As per Future Market Insights, by 2035, the VPN industry is expected to grow to a value of $481.5 billion; however, experts caution that heavy regulation may fragment the market and stifle technological development.

Denmark has continued to lobby for the proposal despite mounting opposition from civil society groups, technology companies, and several member states as the EU Council prepares to vote on October 14, as tensions are increasing. In recent weeks, citizens have taken to online platforms such as X to voice their concerns about the proposed legislation, warning that Europeans would not have fundamentally secure digital privacy. 

Analysts point out that in order to adapt to this changing environment, VPN providers may need to use quantum-resistant technologies faster or explore decentralised models, as highlighted in recent forward-looking studies, which point to the existential stakes of the industry. 

However, one central fear remains across all debates: once surveillance infrastructure is embedded in the environment, its scope is unlikely to be limited to combating child abuse. In their view, it could create a framework for broad and permanent monitoring, reshaping the global norms of digital privacy in a way that undermines both the rights of users and technological innovation in the process. 

A key question to be answered before the EU's vote on October 14 is whether it can successfully balance child protection with its longstanding commitments to privacy and digital rights while maintaining a sense of security. 

It is noted that decisions made in Brussels will have a global impact, potentially setting global standards for how governments deal with encryption, surveillance, and online safety, as experts warn. For legislators, the challenge is to devise effective solutions that protect vulnerable groups without dismantling the secure infrastructures that rely on modern communication, commerce and civic participation. 

One possible path forward, according to observers, could be bolstering cross-border investigative collaboration, strengthening rapid takedown protocols for harmful material, and building specialised law enforcement units which are equipped with advanced tools that are able to target perpetrators rather than citizens collectively, to achieve a better outcome. 

In addition to the fact that private measures would prove better at combating criminal networks, privacy advocates argue that they would also preserve the trust and innovation that Europe has championed for decades, as well as the sense of security that Europe has promoted for decades. 

There will be a clear indication of the EU's global leadership position in safeguarding both child safety and civil liberties through this decision, or whether it will serve as a model for other nations to emulate in terms of surveillance frameworks to maintain secure neighbourhoods.