No connection

Search Results

Crypto Score 42 Bearish

AI Supply Chain Vulnerability: Malicious LLM Routers Targeting Crypto Credentials

Apr 13, 2026 02:47 UTC
ETH
Short term

Researchers have identified a critical security flaw in third-party AI routers that allows attackers to steal private keys and inject malicious code. The study warns that developers using AI agents for smart contract coding are particularly at risk.

  • 26 LLM routers identified as malicious or credential-stealing
  • TLS termination allows intermediaries to read private keys in plaintext
  • 9 routers injected malicious code; 17 accessed AWS credentials
  • Automatic execution settings ('YOLO mode') increase attack success rates
  • Experts recommend cryptographic signing of AI responses for verification

University of California researchers have uncovered a significant security breach within the AI supply chain, revealing that several third-party Large Language Model (LLM) routers are actively stealing credentials and injecting malicious tool calls. These routers, which aggregate access to providers such as OpenAI, Anthropic, and Google, act as intermediaries that terminate Transport Layer Security (TLS) connections. Because these routers have full plaintext access to every message, developers using AI coding agents to work on wallets or smart contracts may inadvertently pass private keys and seed phrases through unsecured infrastructure. The researchers tested 28 paid and 400 free routers, discovering that 26 were compromised in various ways. Specific findings include nine routers actively injecting malicious code and 17 accessing Amazon Web Services (AWS) credentials. In one instance, a router successfully drained Ether (ETH) from a researcher-owned private key. The study also highlighted the danger of 'YOLO mode,' a setting in AI frameworks that executes commands automatically without user confirmation, allowing malicious code to run silently. To mitigate these risks, researchers advise developers to bolster client-side defenses and strictly avoid transmitting private keys through AI agent sessions. The proposed long-term solution is for AI providers to cryptographically sign their responses, enabling agents to mathematically verify that instructions originate from the actual model rather than a malicious intermediary.

Sign up free to read the full analysis

Create a free account to unlock full AI-curated market articles, personalized alerts, and more.

Share this article

Related Articles

Stay Ahead of the Markets

Join thousands of traders using AI-powered market intelligence. Get personalized insights, real-time alerts, and advanced analysis tools.

Home
Terminal
AI
Markets
Profile