Crypto Score: 55 (Bearish)

AI Agent Routers Identified as Vectors for Cryptocurrency Theft

Apr 13, 2026 02:47 UTC
ETH
Short term

University of California researchers have uncovered a critical security flaw in third-party LLM routers that allows for the theft of private keys and credentials. The findings highlight the dangers of unverified intermediary infrastructure in the AI supply chain.

  • 26 LLM routers found stealing credentials and injecting code
  • Plaintext access at TLS termination points enables data theft
  • YOLO mode enables automatic execution of malicious commands
  • 17 routers successfully accessed AWS credentials during testing
  • Experts urge the adoption of cryptographic response signing

University of California researchers have identified a significant security vulnerability in the AI supply chain: several third-party Large Language Model (LLM) routers are being used to steal sensitive credentials and cryptocurrency. The study warns that these intermediaries can silently inject malicious tool calls to compromise user data.

As developers increasingly use AI agents to write smart contracts and manage digital wallets, they often route requests through third-party API intermediaries that aggregate access to models from providers such as OpenAI, Anthropic, and Google. Because these routers terminate Transport Layer Security (TLS) connections, they retain full plaintext access to every message, including highly sensitive private keys and seed phrases.

In a test of 28 paid and 400 free routers, the researchers found 26 to be actively malicious. Nine injected malicious code, 17 accessed researcher-owned Amazon Web Services (AWS) credentials, and one successfully drained Ether (ETH) from a private key. While the financial loss in the experiment was nominal (under $50), the systemic risk to developers is substantial.

The risk is amplified by a feature known as 'YOLO mode', in which AI agents execute commands automatically without requiring user confirmation. This allows a previously benign router to be weaponized silently, while free routers may use low-cost API access as a lure to attract users and harvest credentials.

To mitigate these risks, the researchers advise developers to strengthen client-side defenses and to ensure that private keys never transit through an AI agent session. The proposed long-term solution is for AI providers to cryptographically sign their responses, allowing agents to verify that instructions originate from the actual model rather than a malicious intermediary.
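One client-side defense in this spirit is refusing to forward any message that appears to contain a credential at all. The sketch below illustrates the idea with two illustrative secret formats; the pattern list and function name are hypothetical examples, not tools from the study, and a real scanner would cover many more formats.

```python
import re

# Illustrative patterns only; production secret scanners match many more formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key ID shape
    re.compile(r"\b0x[0-9a-fA-F]{64}\b"),  # raw 32-byte hex private key
]

def contains_secret(message: str) -> bool:
    """Return True if the outgoing message looks like it carries a credential."""
    return any(pattern.search(message) for pattern in SECRET_PATTERNS)

# A client can run this check before any prompt leaves for a router.
assert contains_secret("my key is 0x" + "ab" * 32)
assert not contains_secret("deploy the contract to mainnet")
```

Blocking on the client side matters here precisely because, once the message reaches a TLS-terminating router, nothing downstream can be trusted to redact it.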
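The signing proposal can be sketched in a few lines. The study envisions providers signing responses cryptographically; this toy uses a symmetric HMAC as a stand-in for a real asymmetric signature (e.g. Ed25519), and the key, function names, and message format are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical key for the demo; a real scheme would use the provider's
# private signing key, with agents holding only the public key.
PROVIDER_KEY = b"demo-provider-key"

def sign_response(body: bytes) -> str:
    """Provider side: attach a MAC over the exact response bytes."""
    return hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()

def verify_response(body: bytes, signature: str) -> bool:
    """Agent side: accept a tool call only if the signature checks out."""
    expected = hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# The provider signs its response; the agent verifies before executing anything.
response = b'{"tool_call": "transfer", "amount": "0.01"}'
signature = sign_response(response)
assert verify_response(response, signature)

# A router that rewrites the body in transit invalidates the signature.
tampered = b'{"tool_call": "transfer", "amount": "999"}'
assert not verify_response(tampered, signature)
```

With asymmetric signatures the router never holds the signing key, so even a fully malicious intermediary cannot forge instructions that pass the agent's check.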
