Why Trust - Not Discovery - Is the Real Infrastructure Problem for AI Agents
We're about to repeat an old infrastructure mistake with AI agents: building the discovery layer before the trust layer.
Everyone's talking about building a "DNS for agents" - identity, routing, discovery. There are already competing IETF drafts, OWASP standards, and half a dozen startups racing to become the Agent Name Service. It feels like a clean distributed systems problem.
It's not.
DNS Worked Because Humans Were in the Loop
DNS succeeded for a straightforward reason: humans made the trust decisions. You typed a domain. You decided what to trust. The system just resolved names to addresses - it didn't need to answer whether the thing at that address was safe to interact with.
Agents change that equation completely.
We're now talking about autonomous systems calling other agents, making decisions, and triggering real-world actions - all without a human approving each interaction. When I build agent pipelines for clients at NirixAI, this is the question that keeps coming up: not "how do we connect these agents" but "how do we make sure agent B is actually authorized to do what agent A is asking it to do?"
So the real problem isn't "how do agents find each other?" It's "who decides which agents are trustworthy?"
The Scale of What's Coming
Here's what makes this urgent. Gartner estimates that over 40% of enterprise workflows will involve autonomous agents by the end of this year. Non-human identities - service accounts, API tokens, machine roles, agent credentials - already outnumber human users by up to 100:1 in most organizations. And analysts project that one in four enterprise breaches by 2028 could stem from AI agent exploitation.
We're not talking about a theoretical problem. This is an active attack surface that's expanding faster than our security models can adapt.
Traditional IAM systems were designed for humans. OAuth, OIDC, SAML - they all assume a human is somewhere in the authentication loop. But agents have fundamentally different identity lifecycles. They spawn, evolve, fork, and retire in seconds. They need capability attestation - not just "who are you" but "what are you allowed to do right now." And they interact across protocols that weren't designed to talk to each other.
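To make the lifecycle mismatch concrete, here is a minimal sketch of what an agent-native identity might look like: spawned with a unique ID, scoped capabilities, and a lifetime measured in seconds rather than months. All the names here (`AgentIdentity`, `spawn_agent`) are illustrative assumptions, not any real IAM API.

```python
import time
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical ephemeral agent identity: capabilities are scoped and expiring,
    answering 'what are you allowed to do right now', not just 'who are you'."""
    agent_id: str
    capabilities: frozenset
    issued_at: float
    ttl_seconds: float  # agents spawn and retire in seconds, not months

    def can(self, action: str) -> bool:
        alive = time.time() < self.issued_at + self.ttl_seconds
        return alive and action in self.capabilities

def spawn_agent(capabilities, ttl_seconds=30.0) -> AgentIdentity:
    return AgentIdentity(
        agent_id=f"agent-{uuid.uuid4()}",
        capabilities=frozenset(capabilities),
        issued_at=time.time(),
        ttl_seconds=ttl_seconds,
    )

worker = spawn_agent({"read:invoices"}, ttl_seconds=5.0)
print(worker.can("read:invoices"))   # True while the identity is alive
print(worker.can("transfer:funds"))  # False: capability was never granted
```

Note how the human-IAM assumptions disappear: there is no password, no session, no standing account, only a short-lived identity whose permissions expire with it.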
The Centralization Trap
This is where things get uncomfortable.
Who owns agent identity? Who verifies it? Who controls access?
If one entity controls it, we get centralization. Think about what happened with certificate authorities - a handful of organizations became the gatekeepers of internet trust, and every compromise (DigiNotar, Symantec) sent shockwaves through the entire ecosystem. Now imagine that, but for autonomous systems making financial transactions and accessing sensitive data.
If no one controls it, we get chaos. Open agent registries without strong governance become playgrounds for impersonation attacks and registry poisoning.
NIST launched its AI Agent Standards Initiative in February, which is a good sign. The OWASP Agent Name Service spec uses PKI-backed identity verification with zero-knowledge proofs for capability validation. There's the LOKA Protocol proposing decentralized ethical consensus. Multiple IETF drafts are competing for how agent discovery should work at the DNS level.
But here's what I keep noticing - most of these efforts focus heavily on the discovery and routing layer. The trust and governance layer gets a section in the spec, maybe a paragraph about "future work." That's backwards.
What a Trust Layer Actually Needs
From my experience building multi-agent systems, trust between agents requires at minimum:
Verifiable identity - not just a name, but cryptographic proof of provenance. Decentralized Identifiers (DIDs) anchored in distributed ledgers are the most promising approach here. They give agents self-sovereign identities that aren't dependent on a single authority.
Capability attestation - third-party verifiable credentials that say "this agent can do X, authorized by Y, valid until Z." Not self-reported capabilities. Verified ones.
Runtime access control - static permissions don't work when agents operate dynamically. You need just-in-time provisioning and continuous policy enforcement at the identity level, not in the prompt.
Action traceability - every agent interaction needs an audit trail. When agent A asks agent B to transfer funds, you need to reconstruct the full chain of delegation and authorization after the fact.
Privacy-preserving verification - zero-knowledge proofs let agents prove they have the right credentials without revealing everything about themselves. This matters because full disclosure creates its own attack surface.
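The capability-attestation piece above can be sketched in a few lines. This is a toy model only: a real system would anchor the issuer in a DID and sign with an asymmetric key pair (e.g. Ed25519), whereas this sketch uses an HMAC shared secret as a stand-in so it stays self-contained. The issuer name and field layout are invented for illustration.

```python
import hashlib, hmac, json, time

# Stand-in for an issuer's signing key. A production system would use a
# DID-anchored public/private key pair, never a shared secret like this.
ISSUER_SECRET = b"authority-y-signing-key"

def issue_credential(agent_did: str, capability: str, valid_until: float) -> dict:
    """'This agent can do X, authorized by Y, valid until Z' - signed by the
    issuer, not self-reported by the agent."""
    claim = {"sub": agent_did, "cap": capability, "exp": valid_until,
             "iss": "did:example:authority-y"}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return claim

def verify_credential(cred: dict, required_cap: str) -> bool:
    claim = {k: v for k, v in cred.items() if k != "sig"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(cred["sig"], expected)  # provenance: really issued by Y
            and cred["cap"] == required_cap             # capability matches the request
            and time.time() < cred["exp"])              # still inside the validity window

cred = issue_credential("did:example:agent-b", "transfer:funds", time.time() + 60)
print(verify_credential(cred, "transfer:funds"))            # True
tampered = dict(cred, cap="delete:ledger")
print(verify_credential(tampered, "delete:ledger"))         # False: signature breaks
```

The point of the sketch is the shape of the check: the relying party verifies provenance, scope, and expiry from the credential itself, without having to trust anything the requesting agent says about its own capabilities.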
Microsoft's approach of treating every AI agent as a first-class identity - inventoried, owned, governed with the same rigor as human identities - is directionally right. But even that assumes you have a single organizational boundary. Agent-to-agent trust across organizations is a harder problem by an order of magnitude.
Governance Is Harder Than Engineering
This isn't just protocol design. It's governance. And governance is always harder than engineering.
Most teams I talk to are focused on agent frameworks, orchestration layers, and tool calling. Those are real engineering problems and they matter. But the bottleneck won't be "can my agents call tools?" It'll be "can my agents trust the other agents they need to work with?"
The LOKA Protocol gets this right in principle - it embeds identity, trust, and ethics into the protocol layer itself rather than treating them as application-level concerns. But we're still in the competing-standards phase, with ANS, DNS-AID, AgentDNS, NANDA, and ACDP all proposing different approaches. History tells us this fragmentation phase can last years, and in the meantime, people are shipping agent systems into production with minimal trust infrastructure.
Without a trust layer, scaling agent ecosystems will break. Not might break. Will break. The $236 billion market that analysts project for AI agents by 2034 depends entirely on solving this.
What Engineers Should Do Now
If you're building agent systems today, don't wait for the standards to settle. Here's what's practical right now:
Treat agents as identities, not tools. Give every agent a distinct identity with clear ownership, scoped permissions, and lifecycle management. Don't let agents inherit their creator's credentials.
Enforce authorization at the identity layer, not the prompt layer. If your agent's access control lives in its system prompt, you've already lost. Use proper identity infrastructure.
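One way to picture the difference: the tool dispatcher consults an identity store before executing anything, so nothing the model emits (and nothing an attacker injects into its prompt) can grant new permissions. The dispatcher, permission table, and tool names below are all hypothetical.

```python
# Hypothetical identity-layer permission store. In production this would be
# backed by real identity infrastructure, not an in-memory dict.
PERMISSIONS = {
    "agent-billing": {"invoice.read", "invoice.create"},
    "agent-support": {"ticket.read"},
}

class AuthorizationError(Exception):
    pass

def dispatch(agent_id: str, tool: str, args: dict) -> str:
    # The check lives here, outside the model. Whatever the prompt said,
    # an unauthorized tool call never executes.
    if tool not in PERMISSIONS.get(agent_id, set()):
        raise AuthorizationError(f"{agent_id} is not authorized for {tool}")
    return f"executed {tool} with {args}"

print(dispatch("agent-billing", "invoice.read", {"id": 42}))
try:
    dispatch("agent-support", "invoice.create", {"amount": 100})
except AuthorizationError as err:
    print(err)  # the injection-proof failure mode: denied at the identity layer
```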
Design for auditability from day one. You will need to explain what your agents did and why. Bake in logging and traceability before you need it, because retrofitting it is painful.
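A minimal sketch of what "bake in traceability" can mean in practice: hash-chained log entries that record who acted on whose behalf, so the delegation chain is both reconstructable and tamper-evident after the fact. The entry schema here is an assumption, not a standard.

```python
import hashlib, json, time

audit_log = []

def record(actor, action, on_behalf_of=None):
    """Append a tamper-evident entry; each entry commits to the previous
    one's hash, so the chain of delegation can be verified later."""
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {"ts": time.time(), "actor": actor, "action": action,
             "on_behalf_of": on_behalf_of, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)

def verify_chain() -> bool:
    prev = "genesis"
    for e in audit_log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

# Agent A asks agent B to transfer funds; both steps land in the chain.
record("agent-a", "request:transfer_funds")
record("agent-b", "execute:transfer_funds", on_behalf_of="agent-a")
print(verify_chain())  # True: the full delegation chain is reconstructable
```

Even this toy version shows why retrofitting is painful: the chain only proves anything if every interaction was logged from the start.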
Stay close to the standards conversation. NIST's request for information on AI agent security closes in April. OWASP's ANS spec is open for review. These decisions will shape the infrastructure your agents run on.
The question isn't whether trust infrastructure for agents matters. It's whether we build it deliberately, or patch it together after the first major incident forces our hand.
I'd rather not wait for the incident.