A million agents, zero trust: the security crisis no one is talking about
Picture this: it's 2027. Your company runs 200 AI agents. They schedule meetings, process invoices, respond to customers, deploy code, manage inventory. They talk to each other through MCP, A2A, and protocols that don't exist yet. They connect to thousands of external tools and services.
Now ask yourself: who's watching what they do?
The world we're building into
The trajectory is clear. We're moving from a world of humans using AI tools to a world of autonomous agents operating at scale. The numbers tell the story:
- MCP already has thousands of active servers since Linux Foundation adopted it in late 2025
- Every major AI lab is building agent frameworks
- Enterprise agent deployments are doubling quarter over quarter
- The agent economy is projected at $50B+ by 2030
This isn't speculation. It's infrastructure being built right now, in production, at scale.
And here's the problem: we're building the highways before we've invented seatbelts.
What happens when agents interact at scale
When a single agent calls a single tool, the risk surface is manageable. But agent-to-agent communication creates something fundamentally different: emergent trust networks that no one designed and no one audits.
Consider a simple chain: Agent A calls Agent B, which queries an MCP server, which invokes a tool that accesses a database. Each hop in that chain is a trust decision. And none of them have been explicitly authorized.
The problems multiply:
Cascading permissions
Agent A has access to your email. Agent B has access to your code repository. If A can invoke B, it effectively has access to both. Now multiply this by 200 agents, each with their own tool connections. The permission graph becomes incomprehensible.
Identity and attribution
When an agent takes an action — sends an email, deploys code, modifies a database — who is responsible? The user who configured it? The agent framework? The MCP server that provided the tool? Today, there's no standard for agent identity, no audit trail that spans the full chain.
Supply chain attacks at agent scale
Every MCP server an agent connects to is a dependency. Every tool is an attack surface. Tool poisoning — where a malicious MCP server manipulates agent behavior through crafted tool descriptions — is the npm typosquatting of the agent era, except the blast radius is much larger because agents act autonomously.
We're tracking this at Aguara Watch, monitoring 40K+ skills across 7 registries. What we see is a rapidly expanding ecosystem with minimal security review.
Rug pulls and mutation
An MCP server that was safe yesterday might not be safe today. Servers can change their tool definitions, modify their behavior, or be compromised entirely. Static, point-in-time security assessments aren't enough for an ecosystem that changes daily.
What zero trust means for agents
The traditional zero trust model — "never trust, always verify" — needs to be adapted for agents. Here's how I think about it:
1. Every tool invocation should be authorized, not just the connection. Connecting to an MCP server shouldn't grant blanket access to all its tools. Agents need fine-grained, per-tool, per-invocation authorization. This is what we're building into Oktsec as a runtime security layer.
2. Agent behavior should be continuously monitored, not just at deployment. An agent's behavior is a function of its prompt, tools, and context. Any of these can change. Runtime monitoring that detects anomalous agent behavior isn't a luxury — it's the baseline.
3. The supply chain needs continuous scanning. Every MCP server in your agent's configuration is a dependency that needs to be audited. Aguara Scanner does this with 148+ security rules across 13 categories, but the industry needs this to be standard practice, not an afterthought.
4. Cross-agent communication needs explicit trust boundaries. When agents talk to agents, trust should be scoped, explicit, and auditable. The A2A protocol is a step forward, but we need security layers on top, not bolted on later.
The infrastructure gap
Right now, the AI agent ecosystem is roughly where web applications were in 2005. We have powerful capabilities, rapidly growing adoption, and almost no security infrastructure.
The difference is we don't have a decade to figure it out. The adoption curve for AI agents is compressed into years, not decades. The security infrastructure needs to be built alongside the agents, not after the first wave of breaches.
That's the bet I'm making with Aguara and Oktsec — that the teams building AI agents today will need security tooling that understands this new world. Not adapted from web security, not retrofitted from cloud security, but built from the ground up for a world where millions of agents interact autonomously.
We're at the very beginning. The decisions we make about agent security infrastructure now will shape the ecosystem for years to come.
The window to get this right is open. It won't be open forever.
Does this resonate with what you're building?
Schedule a call