When you invite an AI to your digital party, you’d think giving it access to your SSH key would be the riskiest thing on the table. Turns out, it’s not even the most embarrassing mistake you can make. The real kicker? Trusting a weather app. Or any other seemingly harmless tool you hand over without blinking.
There’s a deliciously sneaky detail in how AI systems juggle tools. Imagine you’ve got an AI with access to both your SSH key and some weather-checking plugin from a company you vaguely remember googling once. According to a comment in a Hacker News thread, this combo is more lethal than it looks:
“The tool poisoning attack allows the provider of one tool to cause the AI to use another tool.”
Let’s unpack that. It’s not just about crossing privilege boundaries. This isn’t your classic buffer overflow or root escalation tale. The threat isn’t “what the AI can do,” but “who can trick it into doing something it was already allowed to do.” That random tool provider? If they can nudge the AI to call a function—say, the one holding your SSH key—they’ve effectively bypassed trust boundaries without ever breaking them. All perfectly legal, all perfectly stupid.
So, you didn’t just give your key to the AI. You unknowingly gave it to the weather app vendor, too. Because the AI is too helpful. And your threat model didn’t include “malicious drizzle plugin.”
None of this is about the AI going rogue. It’s about us not understanding the system-level implications of composability. We’ve moved from “never trust user input” to “never trust third-party plugins with indirect access to privileged calls.” Which is harder to put on a T-shirt, but far more relevant.
The “S” in MCP may technically stand for Security, but apparently, it’s silent.