In September 2025, Anthropic disclosed a case that highlights a major evolution in cyber operations. A state-backed threat actor leveraged an AI-powered coding agent to conduct an automated cyber espionage campaign targeting 30 organizations globally. What stands out is the level of autonomy involved. The AI system independently handled approximately 80 to 90 percent of the tactical workload, including scanning targets, generating exploit code, and attempting lateral movement across systems at machine speed.
While this development is alarming, a more critical risk is emerging. Attackers may no longer need to progress through traditional stages of intrusion. Instead, they can compromise an AI agent already embedded within an organization’s environment. Such agents operate with pre-approved access, established permissions, and a legitimate role that allows them to move across systems as part of daily operations. This removes the need for attackers to build access step by step.
A Security Model Designed for Human Attackers
The widely used cyber kill chain framework, introduced by Lockheed Martin in 2011, was built on the assumption that attackers must gradually work their way into a system. It describes how adversaries move from an initial breach to achieving their final objective.
The model is based on a straightforward principle. Attackers must complete a sequence of steps, and defenders can interrupt them at any stage. Each step increases the likelihood of detection.
A typical attack path includes several phases. It begins with initial access, often achieved by exploiting a vulnerability. The attacker then establishes persistence while avoiding detection mechanisms. This is followed by reconnaissance to understand the system environment. Next comes lateral movement to reach valuable assets, along with privilege escalation when higher levels of access are required. The final stage involves data exfiltration while bypassing data loss prevention controls.
Each of these stages creates opportunities for detection. Endpoint security tools may identify the initial payload, network monitoring systems can detect unusual movement across systems, identity solutions may flag suspicious privilege escalation, and SIEM platforms can correlate anomalies across different environments.
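The logic of stage-based detection can be sketched as a toy correlation rule: if the same host trips alerts mapped to two or more distinct kill-chain stages, escalate. The event names, stage mapping, and threshold below are illustrative, not any specific SIEM's schema.

```python
from collections import defaultdict

# Hypothetical mapping of alert types to kill-chain stages.
STAGE_OF = {
    "edr_payload_detected": "initial_access",
    "unusual_lateral_auth": "lateral_movement",
    "priv_escalation_flag": "privilege_escalation",
    "dlp_large_transfer": "exfiltration",
}

def correlate(alerts):
    """Flag hosts whose alerts span two or more kill-chain stages."""
    stages_by_host = defaultdict(set)
    for host, alert_type in alerts:
        stage = STAGE_OF.get(alert_type)
        if stage:
            stages_by_host[host].add(stage)
    return {h for h, s in stages_by_host.items() if len(s) >= 2}

alerts = [
    ("srv-01", "edr_payload_detected"),
    ("srv-01", "unusual_lateral_auth"),
    ("srv-02", "dlp_large_transfer"),
]
print(correlate(alerts))  # srv-01 spans two stages; srv-02 only one
```

The point of the sketch is that each stage an attacker must traverse adds another signal to correlate; an agent that already sits past every stage produces none of them.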
Even advanced threat groups such as APT29 and LUCR-3 invest heavily in avoiding detection. They often spend weeks operating within systems, relying on legitimate tools and blending into normal traffic patterns. Despite these efforts, they still leave behind subtle indicators, including unusual login locations, irregular access behavior, and small deviations from established baselines. These traces are precisely what modern detection systems are designed to identify.
However, this model does not apply effectively to AI-driven activity.
What AI Agents Already Possess
AI agents function very differently from human users. They operate continuously, interact across multiple systems, and routinely move data between applications as part of their designed workflows. For example, an agent may pull data from Salesforce, send updates through Slack, synchronize files with Google Drive, and interact with ServiceNow systems.
Because of these responsibilities, such agents are often granted extensive permissions during deployment, sometimes including administrative-level access across multiple platforms. They also maintain detailed activity histories, which effectively act as a map of where data is stored and how it flows across systems.
If an attacker compromises such an agent, they immediately gain access to all of these capabilities. This includes visibility into the environment, access to connected systems, and permission to move data across platforms. Importantly, they also gain a legitimate operational cover, since the agent is expected to perform these actions.
As a result, the attacker bypasses every stage of the traditional kill chain. There is no need for reconnaissance, lateral movement, or privilege escalation in a detectable form, because the agent already performs these functions. In this scenario, the agent itself effectively becomes the entire attack chain.
Evidence That the Threat Is Already Materializing
This risk is not theoretical. The OpenClaw incident provides a clear example. Investigations revealed that approximately 12 percent of the skills available in its public marketplace were malicious. In addition, a critical remote code execution vulnerability enabled attackers to compromise systems with minimal effort. More than 21,000 instances of the platform were found to be publicly exposed.
Once compromised, these agents were capable of accessing integrated services such as Slack and Google Workspace. This included retrieving messages, documents, and emails, while also maintaining persistent memory across sessions.
The primary challenge for defenders is that most security tools are designed to detect abnormal behavior. When attackers operate through an AI agent’s existing workflows, their actions appear normal. The agent continues accessing the same systems, transferring similar data, and operating within expected timeframes. This creates a significant detection gap.
How Visibility Solutions Address the Problem
Defending against this type of threat begins with visibility. Organizations must identify all AI agents operating within their environments, including embedded features, third-party integrations, and unauthorized shadow AI tools.
Solutions such as Reco are designed to address this challenge. These platforms can discover all AI agents interacting within a SaaS ecosystem and map how they connect across applications.
They provide detailed visibility into which systems each agent interacts with, what permissions it holds, and what data it can access. This includes visualizing SaaS-to-SaaS connections and identifying risky integration patterns, including those formed through MCP, OAuth, or API-based connections. These integrations can create “toxic combinations,” where agents unintentionally bridge systems in ways that no single application owner would normally approve.
Such tools also help identify high-risk agents by evaluating factors such as permission scope, cross-system access, and data sensitivity. Agents associated with increased risk are flagged, allowing organizations to prioritize mitigation.
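As a rough illustration of how such prioritization might work, the sketch below scores an agent on the three factors just mentioned: permission scope, cross-system access, and data sensitivity. The field names, weights, and values are hypothetical and do not reflect Reco's actual scoring model.

```python
def agent_risk_score(agent):
    """Toy risk score combining permission scope, cross-system reach,
    and data sensitivity. Weights are illustrative only."""
    score = 0
    if agent.get("admin_scopes"):          # permission scope
        score += 40
    # Cross-system access: each connected app adds risk, capped at 5.
    score += min(len(agent.get("connected_apps", [])), 5) * 8
    # Data sensitivity of the systems the agent touches.
    score += {"low": 0, "medium": 10, "high": 20}[agent.get("sensitivity", "low")]
    return score

agent = {
    "name": "crm-sync-bot",
    "admin_scopes": ["salesforce:admin"],
    "connected_apps": ["salesforce", "slack", "gdrive"],
    "sensitivity": "high",
}
print(agent_risk_score(agent))  # 40 + 24 + 20 = 84
```

An agent combining admin scopes, broad SaaS-to-SaaS reach, and sensitive data would score near the top of such a model, which is exactly the "toxic combination" pattern described above.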
In addition, these platforms support enforcing least-privilege access through identity and access governance controls. This limits the potential impact if an agent is compromised.
They also incorporate behavioral monitoring techniques, applying identity-centric analysis to AI agents in the same way as human users. This allows detection systems to distinguish between normal automated activity and suspicious deviations in real time.
What This Means for Security Teams
The traditional kill chain model is based on the assumption that attackers must gradually build access. AI agents fundamentally disrupt this assumption.
A single compromised agent can provide immediate access to systems, detailed knowledge of the environment, extensive permissions, and a legitimate channel for moving data. All of this can occur without triggering traditional indicators of compromise.
Security teams that focus only on detecting human attacker behavior risk overlooking this emerging threat. Attackers operating through AI agents can remain hidden within normal operational activity.
As AI adoption continues to expand, it is increasingly likely that such agents will become targets. In this context, visibility becomes critical. The ability to monitor AI agents and understand their behavior can determine whether a threat is identified early or only discovered during incident response.
Solutions like Reco aim to provide this visibility across SaaS environments, enabling organizations to detect and manage risks associated with AI-driven systems more effectively.
At the RE//verse 2026 event, security researcher Markus Gaasedelen introduced a technique called the “Bliss” double glitch. The method manipulates electrical voltage at precise moments to interfere with the Xbox One’s startup process, effectively bypassing its built-in protections.
This marks the first known instance where the Xbox One’s hardware defenses have been broken in a way that others can replicate. The achievement is being compared to the Reset Glitch Hack that affected the Xbox 360, although this newer approach operates at a deeper level. Instead of targeting software vulnerabilities, it directly interferes with the boot ROM, a core component embedded in the console’s chip. By doing so, the exploit grants complete control over the system, including its most secure layers such as the hypervisor.
When the Xbox One was introduced in 2013, Microsoft designed it with an unusually strong security model. The system relied on multiple layers of encryption and authentication, linking firmware, the operating system, and game files into a tightly controlled verification chain. Within the company, it was even described as one of the most secure products Microsoft had ever built.
A substantial part of this design was its secure boot process. Unlike the Xbox 360, which was compromised through reset-line manipulation, the Xbox One removed such external entry points. It also incorporated a dedicated ARM-based security processor responsible for verifying every stage of the startup sequence. Without valid cryptographic signatures, no code was allowed to run. For many years, this approach appeared highly effective.
Rather than attacking these higher-level protections, the researcher focused on the physical behavior of the hardware itself. Traditional glitching techniques rely on disrupting timing signals, but the Xbox One’s architecture left little opportunity for that. Instead, the method used here involves voltage glitching, where the power supplied to the processor is briefly disrupted.
These momentary drops in voltage can cause the processor to behave unpredictably, such as skipping instructions or misreading operations. However, the timing must be extremely precise, as even a tiny variation can result in failure or system crashes.
To achieve this level of accuracy, specialized hardware tools were developed to monitor and control electrical signals within the system. This allowed the researcher to closely observe how the console behaves at the silicon level and identify the exact points where interference would be effective.
The resulting “Bliss” technique uses two carefully timed voltage disruptions during the startup process. The first interferes with memory protection mechanisms managed by the ARM Cortex subsystem. The second targets a memory-copy operation that occurs while the system is loading initial data. If both steps are executed correctly, the system is redirected to run code chosen by the attacker, effectively taking control of the boot process.
Unlike many modern exploits, this method does not depend on software flaws that can be corrected through updates. Instead, it targets the boot ROM, which is permanently embedded in the chip during manufacturing. Because this code cannot be modified, the vulnerability cannot be patched. As a result, the exploit allows unauthorized code execution across all system layers, including protected components.
With this level of access, it becomes possible to run alternative operating systems, extract encrypted firmware, and analyze internal system data. This has implications for both security research and digital preservation, as it enables deeper understanding of the console’s architecture and may support efforts to emulate its environment in the future.
Beyond research applications, the findings may also lead to practical tools. There is speculation that the technique could be adapted into hardware modifications similar to modchips, which automate the precise electrical conditions needed for the exploit. Such developments could revive longstanding debates around console modification and software control.
From a security perspective, the immediate impact on Microsoft may be limited, as the Xbox One is no longer the company’s latest platform, and newer systems have adopted updated security designs built on similar principles. However, the discovery serves as a lesson for the industry: no system can be considered permanently secure, especially when attacks target the underlying hardware itself.
In recent years, a significant shift has occurred in the strategic calculus behind destructive cyber operations, which have expanded beyond traditional critical infrastructure into lesser-noticed yet equally vital ecosystems underpinning modern economies.
State-aligned threat actors are increasingly focusing on organizations embedded within the logistics and supply chain frameworks that keep entire industries running. A single, well-placed intrusion at one of these junctions can reverberate across multiple interconnected networks with minimal direct involvement from the attacker.
Healthcare supply chains stand out as especially vulnerable in this context. Medical technology companies, pharmaceutical distributors, and logistics providers operate as central hubs for the delivery of care, supporting large healthcare networks.
The scale of these organizations, their interdependence, and their operational criticality make them high-value targets, allowing adversaries to inflict widespread damage indirectly without exposing themselves to the consequences of attacking frontline healthcare organizations head-on. Against this backdrop, a less examined yet increasingly consequential risk is emerging: one rooted not in adversaries' offensive tooling, but in the systems organizations use to orchestrate and secure their own environments.
Device and endpoint management platforms, designed to provide centralized control, visibility, and resilience at scale, are now emerging as force multipliers for attackers. Several recent incidents have lent urgency to this issue, including the intrusion into Stryker Corporation's Microsoft-based environment, which caused rapid operational disruptions across the company's global footprint.
Approximately a week after the company disclosed the breach, the Cybersecurity and Infrastructure Security Agency issued a formal alert stating that malicious activity was targeting endpoint management systems within U.S. organizations.
The Stryker event also triggered a broader investigation. In coordination with the Federal Bureau of Investigation, the agency is working to determine the scope of the threat and identify potentially affected entities. As the events of mid-March illustrate, access at this level can provide systemic leverage.
The incident, which began on March 11, interrupted Stryker's order processing, restricted its manufacturing throughput, and delayed outbound shipments. These effects are consistent with interference at the management level rather than the compromise of a single, isolated system.
The subsequent reporting indicated the incident may have involved the wiping of about 200,000 managed devices as well as the exfiltration of approximately 50 terabytes of data, indicating that both destructive and intelligence-gathering objectives were involved.
Handala later claimed responsibility, describing the operation as retaliation for a strike in southern Iran and underscoring the growing intersection between geopolitical signaling and supply chain disruption in contemporary cyber campaigns.
The practical consequences of such a compromise became evident during the incident. The intrusion knocked out several key operational capabilities, including order processing, manufacturing execution, and distribution, limiting Stryker's ability to service demand across a globally distributed network. The disruption, traceable to the company's Microsoft environment, immediately slowed supply chain processes, creating bottlenecks that extended beyond internal systems to downstream delivery commitments.
The organization initiated its incident response protocol, undertaking containment and forensic analysis with the assistance of external cybersecurity specialists to determine the scope, entry vectors, and persistence mechanisms of the incident. Preliminary assessments from industry observers indicate that Microsoft Intune may have been misused as an integral part of the attack chain.
Lucie Cardiet of Vectra AI noted that the threat actors may have exploited the platform's legitimate administration capabilities to remotely wipe managed endpoints, triggering large-scale factory resets on corporate laptops and mobile devices. Such an approach is technically straightforward but operationally disruptive at scale, particularly in environments where endpoint integrity underpins production systems and logistics operations.
These device resets forced widespread reconfiguration efforts, interrupting the availability of inventory management systems, production scheduling platforms, and coordination tools crucial to supply continuity.
Cumulatively, these disruptions delayed manufacturing cycles and affected the timely processing and fulfillment of orders across multiple facilities, demonstrating how quickly a control-plane compromise can produce tangible operational paralysis. The incident reflects a pattern increasingly characteristic of advanced enterprise intrusions: the convergence of compromised privileged identities, trusted management infrastructure, and deliberate misuse of administrative functions.
In the security field, this alignment is sometimes called a "lethal trifecta," a combination that enables adversaries to inflict systemic damage without conventional malware. According to investigators, the Stryker compromise centered on administrative access to the company's Microsoft identity and device management stack, allowing attackers to operate through enterprise-approved tools.
Platforms such as Microsoft Intune, which provide centralized control over device fleets, are naturally equipped with high-impact capabilities, ranging from policy enforcement to remote wipe functions, that can be repurposed into mechanisms for disruption if commandeered.
Employees were abruptly locked out of corporate systems across geographies, suggesting coordinated administrative actions. This is consistent with "living off the land" techniques that exploit native enterprise controls to avoid detection and maximize operational consequences. The scale of the disruption also underscores the structural dependence inherent in the global healthcare supply chain.
Stryker, one of the most prominent companies in the sector, operates in dozens of countries and employs tens of thousands of people. When the internal systems underlying manufacturing and order fulfillment were rendered inaccessible, the effects spread rapidly across the organization's international operations.
Many facilities, including major hubs in Ireland, reported widespread downtime, with employees unable to access company network services. Although the company stated that its medical devices continued to function safely in clinical settings because they are segregated from the affected corporate systems, the incident nevertheless highlights the fragility of interconnected supply chains.
Medical technology providers serve as critical intermediaries, and disruptions at this level can cascade to distributors, healthcare providers, and ultimately the timeline for delivering patient care. On a technical level, the breach indicates that attacker priorities have shifted from endpoint compromise to identity dominance.
Identity-centric operations are increasingly replacing traditional intrusion models built on malware deployment, lateral movement, and persistence mechanisms. Instead, adversaries abuse credentials, authentication tokens, or privileged sessions to gain control of enterprise control planes.
Once embedded within identity infrastructure, attackers can interact with administrative portals, SaaS management consoles, and device orchestration platforms as if they were legitimate operators. Because actions are executed through trusted channels, malicious activity is significantly less visible. The blast radius of such an intrusion is determined by the scope of privileges the compromised identities possess.
The attackers' intent also appears to have shifted from financial extortion to outright disruption. Although ransomware continues to dominate the threat landscape, these incidents align more closely with destructive operations aimed at disabling systems and degrading functionality rather than extracting payment.
Given the reported scale of device resets and data exfiltration, the campaign appears intended to break operational continuity, echoing wiper-style attacks often associated with state-aligned actors. Operations of this type are designed for maximum disruption rather than financial gain and are frequently deployed to signal strategic intent.
The attribution claims surrounding the incident reinforce this reading: Handala framed the operation within broader geopolitical tensions, presenting it as retaliation. Even if such claims cannot be fully verified, the narrative is consistent with the observation that private sector entities, particularly those embedded in critical supply chains, are increasingly exposed to state-linked cyber activity.
Geopolitical contestation in cyberspace is no longer confined to peripheral targets; it now encompasses integral elements of healthcare, manufacturing, and logistics. These events underscore the need to recalibrate enterprise security priorities, particularly in environments where identity systems and management platforms serve as the operational backbone.
Today's tactics are increasingly misaligned with defenses centered on endpoint detection and malware prevention. Organizations must instead adopt an identity-centric security posture: enforcing strict privilege governance, continuously validating authentication, and monitoring administrative actions across control planes at a granular level.
Enterprise management tools themselves must also be hardened, ensuring that high-impact functions such as remote wipe, policy enforcement, and system-wide configuration changes are subject to layered authorization controls and real-time anomaly detection. For industries embedded in critical supply chains, resilience planning must cover not only the prevention of intrusions but also the ability to sustain operations when control-plane disruptions occur.
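One concrete way to apply that monitoring recommendation is rate-based alerting on high-impact administrative actions, such as remote wipes issued in bulk. The sketch below is a minimal illustration with hypothetical log fields and thresholds, not a representation of any vendor's actual detection logic.

```python
from datetime import datetime, timedelta

def flag_bulk_actions(events, action="device_wipe", limit=5,
                      window=timedelta(minutes=10)):
    """Alert when one identity issues more than `limit` high-impact
    actions inside a sliding time window. Field names are illustrative."""
    per_admin = {}
    alerts = set()
    for ts, admin, act in sorted(events):
        if act != action:
            continue
        # Keep only this admin's timestamps still inside the window.
        times = [t for t in per_admin.get(admin, []) if ts - t <= window]
        times.append(ts)
        per_admin[admin] = times
        if len(times) > limit:
            alerts.add(admin)
    return alerts

# Eight wipe commands from one account, 30 seconds apart: clearly a burst.
base = datetime(2025, 3, 11, 9, 0)
events = [(base + timedelta(seconds=30 * i), "svc-intune-admin", "device_wipe")
          for i in range(8)]
print(flag_bulk_actions(events))  # {'svc-intune-admin'}
```

A legitimate administrator rarely wipes hundreds of devices in minutes; a rule this simple would have surfaced the kind of mass-reset activity described above, provided the action logs themselves are collected outside the compromised control plane.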
Ultimately, the Stryker incident is a reminder that in modern enterprise settings, the most trusted systems can become the most damaging failure points, and their secure operation demands scrutiny commensurate with their potential impact. It also illustrates how modern cyber operations can transcend isolated breaches to become instruments of widespread disruption across global networks.
Opening a project in a code editor is supposed to be routine. In this case, it is enough to trigger a full malware infection.
Security researchers have linked an ongoing campaign associated with North Korean actors, tracked as Contagious Interview or WaterPlum, to a malware family known as StoatWaffle. Instead of relying on software vulnerabilities, the group is embedding malicious logic directly into Microsoft Visual Studio Code (VS Code) projects, turning a trusted development tool into the starting point of an attack.
The entire mechanism is hidden inside a file developers rarely question: tasks.json. This file is typically used to automate workflows. In these attacks, it has been configured with a setting that forces execution the moment a project folder is opened. No manual action is required beyond opening the workspace.
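The auto-execution behavior described here relies on VS Code's documented `runOptions.runOn: "folderOpen"` task setting. The benign skeleton below shows the shape such a `tasks.json` takes; the label and command are placeholders, not the actual malicious payload.

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build-helper",
      "type": "shell",
      "command": "node ./scripts/setup.js",
      "runOptions": {
        "runOn": "folderOpen"
      }
    }
  ]
}
```

Because the file lives in the project's `.vscode` directory and looks like ordinary build automation, a developer reviewing the repository has little reason to treat it as hostile.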
Research from NTT Security shows that the embedded task connects to an external web application, previously hosted on Vercel, to retrieve additional data. The same task operates consistently regardless of the operating system, meaning the behavior does not change between environments even though most observed cases involve Windows systems.
Once triggered, the malware checks whether Node.js is installed. If it is not present, it downloads and installs it from official sources. This ensures the system can execute the rest of the attack chain without interruption.
What follows is a staged infection process. A downloader repeatedly contacts a remote server to fetch additional payloads. Each stage behaves in the same way, reaching out to new endpoints and executing the returned code as Node.js scripts. This creates a recursive chain where one payload continuously pulls in the next.
StoatWaffle is built as a modular framework. One component is designed for data theft, extracting saved credentials and browser extension data from Chromium-based browsers and Mozilla Firefox. On macOS systems, it also targets the iCloud Keychain database. The collected information is then sent to a command-and-control server.
A second module functions as a remote access trojan, allowing attackers to operate the infected system. It supports commands to navigate directories, list and search files, execute scripts, upload data, run shell commands, and terminate itself when required.
Researchers note that the malware is not static. The operators are actively refining it, introducing new variants and updating existing functionality.
The VS Code-based delivery method is only one part of a broader campaign aimed at developers and the open-source ecosystem. In one instance, attackers distributed malicious npm packages carrying a Python-based backdoor called PylangGhost, marking its first known propagation through npm.
Another campaign, known as PolinRider, involved injecting obfuscated JavaScript into hundreds of public GitHub repositories. That code ultimately led to the deployment of an updated version of BeaverTail, a malware strain already linked to the same threat activity.
A more targeted compromise affected four repositories within the Neutralinojs GitHub organization. Attackers gained access by hijacking a contributor account with elevated permissions and force-pushed malicious code. This code retrieved encrypted payloads hidden within blockchain transactions across networks such as Tron, Aptos, and Binance Smart Chain, which were then used to download and execute BeaverTail. Victims are believed to have been exposed through malicious VS Code extensions or compromised npm packages.
According to analysis from Microsoft, the initial compromise often begins with social engineering rather than technical exploitation. Attackers stage convincing recruitment processes that closely resemble legitimate technical interviews. Targets are instructed to run code hosted on platforms such as GitHub, GitLab, or Bitbucket, unknowingly executing malicious components as part of the assessment.
The individuals targeted are typically experienced professionals, including founders, CTOs, and senior engineers in cryptocurrency and Web3 sectors. Their level of access to infrastructure and digital assets makes them especially valuable. In one recent case, attackers unsuccessfully attempted to compromise the founder of AllSecure.io using this approach.
Multiple malware families are used across these attack chains, including OtterCookie, InvisibleFerret, and FlexibleFerret. InvisibleFerret is commonly delivered through BeaverTail, although recent intrusions show it being deployed after initial access is established through OtterCookie. FlexibleFerret, also known as WeaselStore, exists in both Go and Python variants, referred to as GolangGhost and PylangGhost.
The attackers continue to adjust their techniques. Newer versions of the malicious VS Code projects have moved away from earlier infrastructure and now rely on scripts hosted on GitHub Gist to retrieve additional payloads. These ultimately lead to the deployment of FlexibleFerret. The infected projects themselves are distributed through GitHub repositories.
Security analysts warn that placing malware inside tools developers already trust significantly lowers suspicion. When the code is presented as part of a hiring task or technical assessment, it is more likely to be executed, especially under time pressure.
Microsoft has responded to the misuse of VS Code tasks with security updates. In the January 2026 release (version 1.109), a new setting disables automatic task execution by default, preventing tasks defined in tasks.json from running without user awareness. This setting cannot be overridden at the workspace level, limiting the ability of malicious repositories to bypass protections.
Additional safeguards were introduced in February 2026 (version 1.110), including a second prompt that alerts users when an auto-run task is detected after workspace trust is granted.
Beyond development environments, North Korean-linked operations have expanded into broader social engineering campaigns targeting cryptocurrency professionals. These include outreach through LinkedIn, impersonation of venture capital firms, and fake video conferencing links. Some attacks lead to deceptive CAPTCHA pages that trick victims into executing hidden commands in their terminal, enabling cross-platform infections on macOS and Windows. These activities overlap with clusters tracked as GhostCall and UNC1069.
Separately, the U.S. Department of Justice has taken action against individuals involved in supporting North Korea’s fraudulent IT worker operations. Audricus Phagnasay, Jason Salazar, and Alexander Paul Travis were sentenced after pleading guilty in November 2025. Two received probation and fines, while one was sentenced to prison and ordered to forfeit more than $193,000 obtained through identity misuse.
Officials stated that such schemes enable North Korean operatives to generate revenue, access corporate systems, steal proprietary data, and support broader cyber operations. Separate research from Flare and IBM X-Force indicates that individuals involved in these programs undergo rigorous training and are considered highly skilled, forming a key part of the country’s strategic cyber efforts.
What this means
This attack does not depend on exploiting a flaw in software. It depends on exploiting trust.
By embedding malicious behavior into tools, workflows, and hiring processes that developers rely on every day, attackers are shifting the point of compromise. In this environment, opening a project can be just as risky as running an unknown program.
A recent investigation by Check Point Research has uncovered a surge in cyberattacks targeting Qatar, orchestrated by China-linked threat actors such as the Camaro Dragon group. These campaigns are cleverly disguised as breaking news related to escalating tensions in the Middle East, allowing attackers to lure unsuspecting victims.