More than 30 security weaknesses in various AI-powered IDEs have recently been uncovered, raising concerns as to how emerging automated development tools might unintentionally expose sensitive data or enable remote code execution. A collective set of vulnerabilities, referred to as IDEsaster, was termed by security researcher Ari Marzouk (MaccariTA), who found that such popular tools and extensions as Cursor, Windsurf, Zed.dev, Roo Code, GitHub Copilot, Claude Code, and others were vulnerable to attack chains leveraging prompt injection and built-in functionalities of the IDEs. At least 24 of them have already received a CVE identifier, which speaks to their criticality.
However, the most surprising takeaway, according to Marzouk, is how consistently the same attack patterns could be replicated across every AI IDE they examined. Most AI-assisted coding platforms, the researcher said, don't consider the underlying IDE tools within their security boundaries but rather treat long-standing features as inherently safe. But once autonomous AI agents can trigger them without user approval, the same trusted functions can be repurposed for leaking data or executing malicious commands.
Generally, the core of each exploit chain starts with prompt injection techniques that allow an attacker to redirect the large language model's context and behavior. Once the context is compromised, an AI agent might automatically execute instructions, such as reading files, modifying configuration settings, or writing new data, without the explicit consent of the user. Various documented cases showed how these capabilities could eventually lead to sensitive information disclosure or full remote code execution on a developer's system. Some vulnerabilities relied on workspaces being configured for automatic approval of file writes; thus, in practice, an attacker influencing a prompt could trigger code-altering actions without any human interaction.
Researchers also pointed out that prompt injection vectors may be obfuscated in non-obvious ways, such as invisible Unicode characters, poisoned context originating from Model Context Protocol servers, or malicious file references added by developers who may not suspect a thing. Wider concerns emerged when new weaknesses were identified in widely deployed AI development tools from major companies including OpenAI, Google, and GitHub.
As autonomous coding agents see continued adoption in the enterprise, experts warn these findings demonstrate how AI tools significantly expand the attack surface of development workflows. Rein Daelman, a researcher at Aikido, said any repository leveraging AI for automation tasks-from pull request labeling to code recommendations-may be vulnerable to compromise, data theft, or supply chain manipulation. Marzouk added that the industry needs to adopt what he calls Secure for AI, meaning systems are designed with intentionality to resist the emerging risks tied to AI-powered automation, rather than predicated on software security assumptions.