Crypto

UC researchers warn third-Party AI routers are stealing crypto and private keys

7Views



Third-party AI routing services are exposing users to significant security flaws that could result in the theft of cryptocurrency and cloud credentials.

Summary

  • Researchers found that 26 third-party LLM routers are actively injecting malicious code and stealing credentials by exploiting their access to plaintext data.
  • The study revealed that intermediaries can intercept private keys and cloud credentials because they terminate secure encryption to aggregate AI requests.

According to a paper published on Thursday by University of California researchers, the supply chain for Large Language Models (LLM) contains several vulnerabilities that allow for malicious code injection and credential extraction. 

These intermediaries, which developers use to manage access to providers like Google or OpenAI, essentially act as a “middleman” that terminates secure encryption. 

Because they have full plaintext access to every message sent through them, sensitive data like seed phrases or private keys can be intercepted by unverified infrastructure.

The researchers tested 400 free and 28 paid routers to measure the extent of these risks. Nine of these services actively injected malicious code, while 17 separate routers were caught accessing Amazon Web Services credentials owned by the team. 

During the experiment, one router successfully drained Ether from a decoy wallet after the researchers provided a prefunded private key. 

Although the team kept the balances low to ensure the total loss remained under $50, the result confirmed how easily a compromised intermediary can siphon funds.

“26 LLM routers are secretly injecting malicious tool calls and stealing creds,” co-author Chaofan Shou stated on X.

Identifying a malicious router is a difficult task for the average user. The researchers noted that because these services must read data to forward it, there is no visible difference between legitimate handling and active theft

The danger increases when developers enable “YOLO mode,” a setting in many AI frameworks that lets an agent execute commands automatically without a human confirming the action. 

This allows an attacker to send instructions that the user’s system will run instantly, often without the operator’s knowledge.

“The boundary between ‘credential handling’ and ‘credential theft’ is invisible to the client because routers already read secrets in plaintext as part of normal forwarding,” the study explained.

Previously reliable routers can become dangerous if they reuse leaked credentials through weak relays. To prevent these attacks, the research team suggested that developers should never allow private keys or sensitive phrases to pass through an AI agent session. 

A permanent solution would require AI companies to use cryptographic signatures. Such a system would allow an agent to mathematically prove that instructions came from the actual model rather than a tampered third-party source.

“LLM API routers sit on a critical trust boundary that the ecosystem currently treats as transparent transport,” the paper concluded.



Source link

Leave a Reply

Exit mobile version