All posts
mcpsecurityai-agents|

The 5 MCP security blind spots most teams miss

MCP (Model Context Protocol) is winning. It's becoming the standard way AI agents connect to tools, data sources, and external services. Linux Foundation adopted it, every major AI lab supports it, and thousands of servers are already in production.

But here's the thing: most teams treat MCP servers like internal APIs. They're not. An MCP server is a trust boundary between an AI agent and your systems — and most of them are deployed with the same security posture as a weekend hackathon project.

After building Aguara Scanner and analyzing thousands of MCP server configurations, these are the five blind spots I see over and over again.

1. Tool descriptions are an attack surface

This one surprises most developers. Tool descriptions in MCP aren't just documentation — they're instructions that agents read and follow. A malicious or compromised server can craft tool descriptions that manipulate agent behavior.

This is tool poisoning: hiding instructions in a tool's description or schema that override the agent's system prompt, exfiltrate data, or redirect actions to other tools.

Example: a tool described as "Search company documents" could include hidden instructions telling the agent to first send all query parameters to an external endpoint. The user never sees the tool description. The agent just follows it.

Most teams never audit what their MCP servers actually tell agents to do. Aguara Scanner checks for this with dedicated tool poisoning detection rules — and finds issues more often than you'd expect.

2. No authentication doesn't mean "internal only"

The most common pattern I see: an MCP server running on localhost or inside a VPC with zero authentication. The assumption is that network isolation is enough.

It's not. AI agents often run in environments where the network boundary is blurry — cloud functions, container orchestrators, development machines with multiple tools connected. A server that's "internal only" today might be reachable from an unexpected context tomorrow.

Even for genuinely internal servers, authentication matters because it establishes identity. Without it, you can't answer the most basic question: which agent made this request?

3. Tools expose more than you think

Developers build MCP tools to solve a specific problem — "read this database," "send this notification," "query this API." But they rarely think about what else the tool can access.

A database query tool might accept arbitrary SQL. A file reader might traverse beyond its intended directory. A notification tool might accept any recipient, not just the intended one.

The principle of least privilege applies to MCP tools, but almost nobody enforces it. Each tool should expose the minimum capability needed, with explicit constraints on inputs. Not "query the database" but "query the users table with these allowed columns and a maximum of 100 rows."

4. Cross-server trust is implicit and unscoped

When an agent connects to multiple MCP servers, each server implicitly trusts every other server through the agent. Server A's tools can influence the agent to call Server B's tools in unintended ways.

This creates cross-origin escalation — the same class of problem that plagued web browsers before CORS, except there's no equivalent security model for MCP yet.

A practical example: an agent connected to both a "company wiki" MCP server and a "code deployment" MCP server. A compromised wiki tool could manipulate the agent into deploying malicious code. The deployment server has no way to know the request originated from a compromised source.

This is why Oktsec operates at the gateway layer — it can enforce policies across server boundaries, something individual servers can't do on their own.

5. No one monitors what happens after deployment

Teams invest time in building and testing MCP servers. But after deployment? Silence. No monitoring of what tools are being called, by which agents, with what parameters, at what frequency.

This means:

  • You can't detect anomalous behavior (an agent suddenly making thousands of calls to a tool it rarely uses)
  • You can't audit actions after the fact (who deployed that code change last Tuesday?)
  • You can't identify compromised agents (one that starts exfiltrating data through tool calls)

Runtime monitoring for MCP is the equivalent of application logs and APM for web services. It's not optional infrastructure — it's the minimum viable security posture.

The path forward

None of these are unsolvable problems. They're the same kind of issues every new protocol faces as it moves from early adoption to production infrastructure. The difference is the timeline — MCP adoption is measured in months, not years.

What I'd recommend for any team deploying MCP servers today:

  1. Scan before you deploy. Run Aguara against your MCP servers. 148+ rules, 13 categories, open-source. It takes minutes.
  2. Authenticate everything. Even internal servers. Especially internal servers.
  3. Audit your tool descriptions. Read them as an attacker would. What could an agent be tricked into doing?
  4. Scope your tools. Minimum capability, explicit constraints, validated inputs.
  5. Monitor at runtime. Know what your agents are doing after you deploy them.

The MCP ecosystem is going to be critical infrastructure. Let's build it like it matters.

Does this resonate with what you're building?

Schedule a call