Hackers Can Introduce Malicious Code Using AI Code Editors With A New” Rules File Backdoor” Attack

April 18, 2025Ravie LakshmananAI Security / Software Security

Researchers in cybersecurity have provided details on a new supply chain attack vector known as Rules File Backdoor that affects GitHub Copilot and Cursor’s artificial intelligence ( AI ) powered code editors, causing them to inject malicious code.

In a complex statement shared with The Hacker News, Pillar security’s Co-Founder and CTO Ziv Karliner stated that” this technique enables hackers to discreetly deal AI-generated code by inserting hidden harmful instructions into apparently innocent configuration files used by Cursor and GitHub Copilot.”

Concern actors can change the AI to insert malicious code that bypasses conventional code reviews by exploiting hidden binary characters and powerful evasion strategies in the model facing instruction payload.

The attack vector is notable for allowing malicious code to spread across projects without apparent warning, which poses a risk to the supply chain.

The key to the attack rests on the that AI agents use to regulate their behavior and aid in the definition of best coding practices and project architecture.

In particular, it involves embedding expertly crafted prompts within appear to be benign rule files, which will result in the AI tool producing code that contains security flaws or backdoors. In other words, the poisoned regulations nudge the AI toward writing nefarious code.

To conceal malicious instructions, using zero-width joiners, bidirectional text markers, and other invisible characters to conceal malicious instructions and utilizing the AI’s ability to interpret natural language to create vulnerable code using semantic patterns that deceive the model into overriding ethical and safety constraints, can be done in this way.

Users are responsible for reviewing and accepting suggestions made by the tools, according to responsible disclosures made in late February and early March 2024 by both Cursor and GiHub.

” Rules File Backdoor” poses a significant risk by using the AI itself as a potential attack vector, turning the developer’s most trusted assistant into an unwitting accomplice, potentially affecting millions of end users through compromised software, according to Karliner.

” All future code-generation sessions by team members are affected once a poisoned rule file is incorporated into a project repository. Additionally, malicious instructions frequently survive project forking, making them a target for supply chain attacks that can affect end users and downstream dependencies.

I found this article to be interesting. To read more exclusive content we post, follow us on and Twitter.

Leave a Comment