
AI Agents Are Reshaping Cyber Threats, Making Traditional Kill Chains Less Relevant

In September 2025, Anthropic disclosed a case that highlights a major evolution in cyber operations. A state-backed threat actor leveraged an AI-powered coding agent to conduct an automated cyber espionage campaign targeting 30 organizations globally. What stands out is the level of autonomy involved. The AI system independently handled approximately 80 to 90 percent of the tactical workload, including scanning targets, generating exploit code, and attempting lateral movement across systems at machine speed.

While this development is alarming, a more critical risk is emerging. Attackers may no longer need to progress through traditional stages of intrusion. Instead, they can compromise an AI agent already embedded within an organization’s environment. Such agents operate with pre-approved access, established permissions, and a legitimate role that allows them to move across systems as part of daily operations. This removes the need for attackers to build access step by step.


A Security Model Designed for Human Attackers

The widely used cyber kill chain framework, introduced by Lockheed Martin in 2011, was built on the assumption that attackers must gradually work their way into a system. It describes how adversaries move from an initial breach to achieving their final objective.

The model is based on a straightforward principle. Attackers must complete a sequence of steps, and defenders can interrupt them at any stage. Each step increases the likelihood of detection.

A typical attack path includes several phases. It begins with initial access, often achieved by exploiting a vulnerability. The attacker then establishes persistence while avoiding detection mechanisms. This is followed by reconnaissance to understand the system environment. Next comes lateral movement to reach valuable assets, along with privilege escalation when higher levels of access are required. The final stage involves data exfiltration while bypassing data loss prevention controls.

Each of these stages creates opportunities for detection. Endpoint security tools may identify the initial payload, network monitoring systems can detect unusual movement across systems, identity solutions may flag suspicious privilege escalation, and SIEM platforms can correlate anomalies across different environments.
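The stage-by-stage detection opportunities described above can be modeled as a simple mapping. The following is an illustrative sketch only; the stage names and control categories are assumptions drawn from the paragraph, not a formal taxonomy:

```python
# Illustrative sketch: each kill-chain stage paired with the class of
# control most likely to detect it. Names are assumptions for
# illustration, not a definitive mapping.

KILL_CHAIN_DETECTIONS = {
    "initial_access":       "endpoint security (payload identification)",
    "persistence":          "endpoint security / EDR",
    "reconnaissance":       "network monitoring",
    "lateral_movement":     "network monitoring (unusual cross-system traffic)",
    "privilege_escalation": "identity solutions (suspicious elevation)",
    "exfiltration":         "DLP and SIEM correlation",
}

def detection_coverage(observed_stages):
    """Return the controls that had a chance to fire for an intrusion
    that progressed through the given stages."""
    return [KILL_CHAIN_DETECTIONS[s] for s in observed_stages
            if s in KILL_CHAIN_DETECTIONS]

# A human attacker must walk every stage, so every control gets a chance:
human_path = list(KILL_CHAIN_DETECTIONS)
print(len(detection_coverage(human_path)))  # 6 detection opportunities

# An attacker riding a compromised agent skips straight to the objective:
agent_path = ["exfiltration"]
print(len(detection_coverage(agent_path)))  # 1 detection opportunity
```

The point of the sketch is the asymmetry: a human intrusion exposes itself six times, while agent abuse may expose itself once, or not at all.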

Even advanced threat groups such as APT29 and LUCR-3 invest heavily in avoiding detection. They often spend weeks operating within systems, relying on legitimate tools and blending into normal traffic patterns. Despite these efforts, they still leave behind subtle indicators, including unusual login locations, irregular access behavior, and small deviations from established baselines. These traces are precisely what modern detection systems are designed to identify.

However, this model does not apply effectively to AI-driven activity.


What AI Agents Already Possess

AI agents function very differently from human users. They operate continuously, interact across multiple systems, and routinely move data between applications as part of their designed workflows. For example, an agent may pull data from Salesforce, send updates through Slack, synchronize files with Google Drive, and interact with ServiceNow systems.

Because of these responsibilities, such agents are often granted extensive permissions during deployment, sometimes including administrative-level access across multiple platforms. They also maintain detailed activity histories, which effectively act as a map of where data is stored and how it flows across systems.

If an attacker compromises such an agent, they immediately gain access to all of these capabilities. This includes visibility into the environment, access to connected systems, and permission to move data across platforms. Importantly, they also gain a legitimate operational cover, since the agent is expected to perform these actions.

As a result, the attacker bypasses every stage of the traditional kill chain. There is no need for reconnaissance, lateral movement, or privilege escalation in a detectable form, because the agent already performs these functions. In this scenario, the agent itself effectively becomes the entire attack chain.


Evidence That the Threat Is Already Materializing

This risk is not theoretical. The OpenClaw incident provides a clear example. Investigations revealed that approximately 12 percent of the skills available in its public marketplace were malicious. In addition, a critical remote code execution vulnerability enabled attackers to compromise systems with minimal effort. More than 21,000 instances of the platform were found to be publicly exposed.

Once compromised, these agents were capable of accessing integrated services such as Slack and Google Workspace. This included retrieving messages, documents, and emails, while also maintaining persistent memory across sessions.

The primary challenge for defenders is that most security tools are designed to detect abnormal behavior. When attackers operate through an AI agent’s existing workflows, their actions appear normal. The agent continues accessing the same systems, transferring similar data, and operating within expected timeframes. This creates a significant detection gap.


How Visibility Solutions Address the Problem

Defending against this type of threat begins with visibility. Organizations must identify all AI agents operating within their environments, including embedded features, third-party integrations, and unauthorized shadow AI tools.

Solutions such as Reco are designed to address this challenge. These platforms can discover all AI agents interacting within a SaaS ecosystem and map how they connect across applications.

They provide detailed visibility into which systems each agent interacts with, what permissions it holds, and what data it can access. This includes visualizing SaaS-to-SaaS connections and identifying risky integration patterns, including those formed through MCP, OAuth, or API-based connections. These integrations can create “toxic combinations,” where agents unintentionally bridge systems in ways that no single application owner would normally approve.
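A toxic-combination check of this kind can be sketched as a set intersection over each agent's integrations. The agent names, app names, and sensitivity labels below are hypothetical, chosen to mirror the examples in this post; this is not any vendor's actual detection logic:

```python
# Sketch of "toxic combination" detection: flag any agent whose
# integrations bridge a sensitive data source and an outbound-sharing
# channel. All names and labels here are illustrative assumptions.

SENSITIVE_SOURCES = {"Salesforce", "ServiceNow"}
OUTBOUND_CHANNELS = {"Slack", "Google Drive"}

agent_integrations = {
    "sales-sync-agent": {"Salesforce", "Slack"},
    "ticket-bot":       {"ServiceNow"},
    "doc-assistant":    {"Google Drive"},
}

def toxic_combinations(integrations):
    """Flag agents that connect a sensitive source to an outbound
    channel -- a bridge no single application owner would see in
    isolation."""
    return {
        agent for agent, apps in integrations.items()
        if apps & SENSITIVE_SOURCES and apps & OUTBOUND_CHANNELS
    }

print(toxic_combinations(agent_integrations))  # {'sales-sync-agent'}
```

Here only the agent that touches both a sensitive source and a sharing channel is flagged; agents confined to one side of the bridge are not.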

Such tools also help identify high-risk agents by evaluating factors such as permission scope, cross-system access, and data sensitivity. Agents associated with increased risk are flagged, allowing organizations to prioritize mitigation.
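A minimal risk-scoring sketch along these lines might combine the three factors named above. The weights and thresholds are invented for illustration, not a real product's scoring model:

```python
# Illustrative agent risk score combining permission scope,
# cross-system access, and data sensitivity. Weights and cutoffs
# are assumptions, not a vendor's actual model.

def agent_risk_score(admin_permissions: bool,
                     connected_systems: int,
                     handles_sensitive_data: bool) -> int:
    score = 0
    score += 40 if admin_permissions else 10      # permission scope
    score += min(connected_systems, 5) * 8        # cross-system access
    score += 30 if handles_sensitive_data else 0  # data sensitivity
    return score  # 0..110, higher means riskier

def priority(score: int) -> str:
    return "high" if score >= 70 else "medium" if score >= 40 else "low"

s = agent_risk_score(admin_permissions=True,
                     connected_systems=4,
                     handles_sensitive_data=True)
print(s, priority(s))  # 102 high
```

Scoring like this is only a triage aid: it ranks which agents deserve permission reviews first, rather than proving any of them is compromised.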

In addition, these platforms support enforcing least-privilege access through identity and access governance controls. This limits the potential impact if an agent is compromised.

They also incorporate behavioral monitoring techniques, applying identity-centric analysis to AI agents in the same way as human users. This allows detection systems to distinguish between normal automated activity and suspicious deviations in real time.
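Identity-centric monitoring of an agent can be sketched as a comparison between current activity and a learned baseline. The field names, baseline values, and thresholds below are illustrative assumptions:

```python
# Sketch of behavioral monitoring for an AI agent: compare current
# activity against an established baseline and report deviations.
# Field names and thresholds are illustrative assumptions.

baseline = {
    "apps": {"Salesforce", "Slack", "Google Drive"},
    "avg_records_per_hour": 200,
}

def deviations(activity, baseline, volume_factor=3.0):
    """Return human-readable anomalies: systems the agent has never
    touched before, or data volume far above its baseline."""
    alerts = []
    new_apps = activity["apps"] - baseline["apps"]
    if new_apps:
        alerts.append(f"new systems accessed: {sorted(new_apps)}")
    if activity["records_per_hour"] > volume_factor * baseline["avg_records_per_hour"]:
        alerts.append("data volume exceeds baseline")
    return alerts

suspicious = {"apps": {"Salesforce", "Slack", "pastebin-api"},
              "records_per_hour": 900}
print(deviations(suspicious, baseline))
```

The key design choice is that the baseline belongs to the agent's identity, not to any one application, so a deviation is visible even when each individual action looks routine to the app that serves it.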


What This Means for Security Teams

The traditional kill chain model is based on the assumption that attackers must gradually build access. AI agents fundamentally disrupt this assumption.

A single compromised agent can provide immediate access to systems, detailed knowledge of the environment, extensive permissions, and a legitimate channel for moving data. All of this can occur without triggering traditional indicators of compromise.

Security teams that focus only on detecting human attacker behavior risk overlooking this emerging threat. Attackers operating through AI agents can remain hidden within normal operational activity.

As AI adoption continues to expand, it is increasingly likely that such agents will become targets. In this context, visibility becomes critical. The ability to monitor AI agents and understand their behavior can determine whether a threat is identified early or only discovered during incident response.

Solutions like Reco aim to provide this visibility across SaaS environments, enabling organizations to detect and manage risks associated with AI-driven systems more effectively.

Barclays Introduces New Step-by-Step Model to Tackle Modern Fraud

Banks and shops are facing more advanced types of fraud that mix online tricks with real-world scams. To fight back, experts from Barclays and the security company ThreatFabric have created a detailed model to understand how these frauds work from start to finish. This system is called a fraud kill chain, and it helps organizations break down and respond to fraud at every stage.


What Is a Kill Chain?

The kill chain idea originally came from the military. It was used to describe each step of an attack so it could be stopped in time. In 2011, cybersecurity experts started using it to map out how hackers attack computer systems. This helped security teams block online threats like viruses, phishing emails, and ransomware.

But fraud doesn’t always follow the same patterns as hacking. It often includes human error, emotional tricks, and real-life actions. That’s why banks like Barclays needed a different version of the kill chain made specifically for financial fraud.


Why Fraud Needs a New Framework

Barclays noticed a new type of scam using tap-to-pay systems, also known as NFC, or near-field communication. This technology lets people pay by simply tapping their cards or phones. Criminals found ways to misuse it by relaying those tap signals to a device somewhere else and making payments without permission.

When Barclays and ThreatFabric studied these scams, they realized that the NFC trick was just one part of a larger process. There were many steps before and after it. But there was no clear way for banks and retailers to explain or share all this information. So, they created a new model to organize it all.


How the Fraud Kill Chain Works

The new fraud kill chain has ten steps. It starts with the fraudsters gathering data about victims and moves through stages like emotional manipulation, fake messages, stealing passwords, getting into accounts, and finally taking and hiding the money. Each of these steps includes different tricks and techniques.

For example, a scam might begin with a fake text message asking the victim to click a link. Once the victim enters their details, criminals can add their card to a device and make payments from far away. This kind of attack is sometimes called a ghost tap.


Retailers Use Their Own Version

Retail companies like Target are also building similar models. They’ve found that even simple scams, like messing with gift cards, involve many people and actions. Without a clear way to describe each part, it's hard for teams to stop them in time.

By using a structured approach to fraud, companies can better understand how scams happen, spot weak points, and stop future attacks. This new model helps everyone speak the same language when it comes to stopping fraud—and protects people from losing their money.