Two protocols now define how AI agents interact with the world. MCP — the Model Context Protocol — handles how agents talk to tools and data sources. Google’s A2A — Agent-to-Agent protocol — handles how agents talk to each other. Both reached critical mass in March 2026. Together they form a complete stack. And your database is the foundation both protocols ultimately depend on.
If you’re running any kind of data infrastructure, understanding this two-layer architecture isn’t optional anymore. It determines how your data gets consumed by the next generation of applications.
The Protocol Split: Tools vs. Peers
The distinction between MCP and A2A maps to a fundamental difference in how agents interact with external systems.
MCP is for agent-to-tool communication. When an agent needs to query a database, call a REST API, read a file, or invoke any passive capability, it uses MCP. The tool doesn’t have its own reasoning — it exposes a function, the agent calls it, and it returns a result. MCP standardized this interaction pattern with a schema for tool discovery, typed inputs/outputs, and safety annotations.
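That interaction contract is easy to picture as data. Here is a sketch of a single MCP tool definition as a plain Python dict — the field names (`inputSchema`, `annotations`, `readOnlyHint`, `destructiveHint`) follow the MCP tool schema, while the tool itself and the agent-side policy function are illustrative:

```python
# A minimal MCP tool definition as a plain Python dict. Field names follow
# the MCP tool schema; the tool itself ("query_customers") is illustrative.
query_customers_tool = {
    "name": "query_customers",
    "description": "Return customers matching an optional filter.",
    "inputSchema": {  # JSON Schema describing the typed input
        "type": "object",
        "properties": {
            "filter": {"type": "string"},
            "limit": {"type": "integer", "default": 100},
        },
    },
    "annotations": {  # safety hints the agent can reason about
        "readOnlyHint": True,
        "destructiveHint": False,
    },
}

def is_safe_to_autorun(tool: dict) -> bool:
    """Agent-side policy sketch: auto-run only tools marked read-only."""
    hints = tool.get("annotations", {})
    return bool(hints.get("readOnlyHint")) and not hints.get("destructiveHint", True)
```

The point of the annotations is exactly this kind of mechanical policy check: the agent can decide whether a call is safe before it ever executes.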
The numbers speak for themselves: MCP crossed 97 million monthly SDK downloads in February 2026. Every major AI provider — Anthropic, OpenAI, Google, Microsoft, Amazon — now supports it. It moved from Anthropic’s internal experiment to an industry standard governed by the Agentic AI Foundation under the Linux Foundation.
A2A is for agent-to-agent communication. When an agent needs to delegate a task to another agent that has its own reasoning, planning, and autonomy, it uses A2A. The key difference is that the peer agent is opaque — you don’t call a function, you describe what you need and let the other agent figure out how to accomplish it.
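On the wire, that difference shows up in the request shape. Below is a hedged sketch of an A2A delegation request as a Python dict — A2A rides on JSON-RPC 2.0, but the method name and message layout have shifted across spec revisions, so treat the field names here as illustrative rather than normative:

```python
# A sketch of an A2A delegation request. A2A is JSON-RPC-based; the
# method name and message shape here are illustrative, not normative.
a2a_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",  # delegate a task to a peer agent
    "params": {
        "message": {
            "role": "user",
            "parts": [{
                "kind": "text",
                "text": "Find the top 10 customers by LTV "
                        "and draft a retention campaign",
            }],
        },
    },
}
# Note what is absent: no function name, no typed arguments. The peer
# agent plans its own tool calls (via MCP) to satisfy the request.
```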
Google launched A2A in April 2025 with over 50 partners. By March 2026, the enterprise adoption list reads like a who’s who: Adobe is using A2A to make its distributed agents interoperable with Google Cloud’s ecosystem. S&P Global Market Intelligence adopted it for inter-agent communication across its data services. Microsoft added A2A support in Azure AI Foundry and Copilot Studio. SAP wired it into Joule, its AI assistant.
Here’s how the two protocols layer:
┌─────────────────────────────────────────────┐
│          Agent-to-Agent (A2A)               │
│  "Find me the top 10 customers by LTV       │
│   and draft a retention campaign"           │
├──────────────────────┬──────────────────────┤
│   Agent A            │   Agent B            │
│   (Orchestrator)     │   (Marketing)        │
│       │              │       │              │
│       ▼              │       ▼              │
│  ┌─────────┐         │  ┌─────────┐         │
│  │   MCP   │         │  │   MCP   │         │
│  │  Tools  │         │  │  Tools  │         │
│  └────┬────┘         │  └────┬────┘         │
│       ▼              │       ▼              │
│  Database API        │  Email Service       │
│  (Faucet)            │  (SendGrid)          │
└──────────────────────┴──────────────────────┘
Agent A uses MCP to query your database for customer data. It delegates the campaign task to Agent B via A2A. Agent B uses MCP to call the email service. Neither agent needs to know the other’s implementation. Both need structured, typed access to the services underneath.
Why This Matters for Database Teams
If you maintain databases that get consumed by applications — which is most databases — this two-protocol world changes your exposure surface in three concrete ways.
1. Your Database Is Now a Tool, Not Just a Datastore
In the MCP model, your database publishes a set of tools that agents can discover and invoke. This is fundamentally different from a connection string in a config file. Tools have names, descriptions, input schemas, and safety annotations. They’re discoverable at runtime. An agent connecting to your MCP server can enumerate every table, understand what each endpoint does, and reason about whether a particular operation is safe before executing it.
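Discovery itself is a single JSON-RPC call. The sketch below shows the MCP `tools/list` exchange — the method name comes from the MCP spec; the server's response content is illustrative:

```python
# MCP tool discovery: the client asks the server to enumerate its tools.
# "tools/list" is the MCP method; the response contents are illustrative.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "query_customers",
             "annotations": {"readOnlyHint": True}},
            {"name": "delete_customer",
             "annotations": {"destructiveHint": True}},
        ]
    },
}

# An agent can partition the tool list by risk before calling anything.
safe_tools = [t["name"] for t in list_response["result"]["tools"]
              if t.get("annotations", {}).get("readOnlyHint")]
```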
Here’s what that looks like with Faucet:
# Start Faucet pointing at your database
faucet serve --db postgres://localhost/myapp
# Every table is now discoverable as both REST endpoints and MCP tools
# GET /api/customers → MCP tool: query_customers
# POST /api/customers → MCP tool: create_customer
# PUT /api/customers/:id → MCP tool: update_customer
# DELETE /api/customers/:id → MCP tool: delete_customer
Each tool gets appropriate safety annotations automatically — reads are marked readOnlyHint: true, deletes are marked destructiveHint: true. The agent can reason about what it’s doing before it does it.
The alternative — giving agents raw SQL access — is the equivalent of giving every new hire root on day one and hoping they read the wiki.
2. Agent-to-Agent Delegation Multiplies Your Query Surface
When a single agent queries your database, you can predict the access patterns. When that agent can delegate to other agents via A2A, and those agents can sub-delegate, your query surface becomes combinatorial.
Consider a real-world scenario: a finance agent asks a reporting agent to “generate the Q1 revenue summary.” The reporting agent connects to your database via MCP, discovers the relevant tables, and runs a series of queries. But the reporting agent might also delegate to a visualization agent to create charts, which might query the same database for different aggregations. And the finance agent might simultaneously ask a compliance agent to verify the numbers, triggering its own set of queries.
None of these agents know about each other. They all hit your database through MCP. Your API layer needs to handle:
- Concurrent access from agents you didn’t anticipate
- Rate limiting per agent identity, not just per API key
- Audit trails that trace back through the A2A delegation chain
- Read-only enforcement for agents that should never write
This is where a proper API layer becomes non-negotiable. With Faucet’s RBAC, you can create roles that map to agent capabilities:
# Read-only role for reporting agents
faucet role create reporting-reader \
  --allow "GET /api/orders" \
  --allow "GET /api/customers" \
  --allow "GET /api/revenue_summary"
# Write role for the CRM agent that manages customer records
faucet role create crm-writer \
  --allow "GET /api/customers" \
  --allow "POST /api/customers" \
  --allow "PUT /api/customers/*"
Each agent gets credentials scoped to its role. The orchestrating agent can delegate freely via A2A, but the downstream agents can only do what their MCP tool permissions allow.
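One of the harder requirements above — audit trails that survive the delegation chain — can be sketched in a few lines. This assumes a hypothetical `X-A2A-Chain` header carrying the caller lineage; no such header is standardized, so treat it as a placeholder for whatever your A2A layer actually propagates:

```python
# Sketch of audit logging that preserves an A2A delegation chain.
# Assumes a hypothetical "X-A2A-Chain" header carrying caller lineage;
# neither the header name nor its format is standardized.
import json
import time

def audit_record(agent_id: str, chain_header: str, tool: str) -> str:
    """Build one audit-log line tracing an MCP tool call back through
    the agents that delegated it."""
    record = {
        "ts": time.time(),
        "agent": agent_id,                            # who made this MCP call
        "delegation_chain": chain_header.split(","),  # who asked them to
        "tool": tool,
    }
    return json.dumps(record)

line = audit_record("reporting-agent", "finance-agent,orchestrator", "query_orders")
```

With a chain like this in every log line, "which agent deleted that row" becomes a query instead of a forensic investigation.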
3. Agent Cards Meet API Discovery
A2A introduces the concept of Agent Cards — JSON documents published at a well-known URL that describe an agent’s capabilities, supported modalities, authentication requirements, and pricing. They’re the equivalent of an OpenAPI spec, but for agents instead of APIs.
This creates an interesting convergence. Your database API already has an OpenAPI spec describing its endpoints. The MCP server exposes those same capabilities as tools. And now, if you wrap your database in an A2A-capable agent, the Agent Card describes the agent’s high-level capabilities to other agents.
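A minimal sketch of what such an Agent Card might contain for a database-backed agent — the field names loosely follow the A2A Agent Card schema (`name`, `url`, `capabilities`, `skills`), but every value here is made up:

```python
# Illustrative Agent Card for a database-backed agent. Field names loosely
# follow the A2A Agent Card schema; endpoint and values are hypothetical.
agent_card = {
    "name": "ecommerce-data-agent",
    "description": "Answers questions about customer and order data.",
    "url": "https://agents.example.com/ecommerce",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [{
        "id": "customer-analytics",
        "description": "Aggregate and summarize customer records.",
    }],
}
```

Other agents fetch the card from the well-known URL before deciding whether to delegate — capability discovery happens before any task is sent.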
All three layers describe the same underlying data:
| Layer | Format | Audience | Example |
|---|---|---|---|
| REST API | OpenAPI 3.1 | Human developers, API clients | GET /api/customers returns Customer[] |
| MCP Server | Tool definitions | AI agents (tool use) | query_customers tool with typed input schema |
| A2A Agent | Agent Card | Other AI agents (delegation) | “I can answer questions about customer data” |
Faucet generates the first two automatically from your database schema — the OpenAPI spec and the MCP tool definitions. The A2A Agent Card is the next logical layer, and it’s mostly a projection of the same metadata.
The point is: if your database API layer generates clean, typed, well-documented endpoints, all three protocol layers benefit. If your API layer is a mess of undocumented stored procedures and ad-hoc query strings, every layer above it inherits that mess.
The Four-Protocol Landscape
MCP and A2A are the two protocols with real traction, but the full landscape as of Q1 2026 includes four worth tracking:
| Protocol | Purpose | Backed By | Status |
|---|---|---|---|
| MCP | Agent ↔ Tool | Anthropic → Linux Foundation | 97M monthly SDK downloads |
| A2A | Agent ↔ Agent | Google + 50 launch partners | Adobe, SAP, Microsoft, S&P Global |
| ACP | Agent Communication | IBM / BeeAI | Emerging |
| UCP | Unified Communication | Cisco | Early stage |
For database teams, MCP is the protocol that matters most today — it’s how agents actually access your data. A2A matters because it determines who sends those agents to your doorstep and what they’re trying to accomplish. ACP and UCP are worth watching but don’t require action yet.
The practical implication: if your database already speaks MCP, you’re well-positioned. The A2A layer sits above MCP, not beside it. Agents that receive delegated tasks via A2A still use MCP to access tools and data. Getting your MCP integration right is the foundation everything else builds on.
What Production Looks Like
Let’s walk through what a production A2A + MCP setup looks like with a real database. Say you have a PostgreSQL database backing an e-commerce application.
Step 1: Expose the database as a REST API with MCP support.
# Install Faucet
curl -fsSL https://get.faucet.dev | sh
# Point it at your database
faucet serve --db postgres://user:pass@localhost/ecommerce --port 8080
You now have REST endpoints for every table, an OpenAPI 3.1 spec at /api/_spec, and an MCP server at /mcp — all generated automatically from your schema.
Step 2: Configure agent access with RBAC.
# Create scoped API keys for different agent roles
faucet apikey create --role reader --name "analytics-agent"
faucet apikey create --role writer --name "order-management-agent"
Step 3: Connect agents via MCP.
Any MCP-compatible agent can now discover and use your database tools:
{
  "mcpServers": {
    "ecommerce-db": {
      "url": "http://localhost:8080/mcp",
      "headers": {
        "Authorization": "Bearer faucet_key_analytics_abc123"
      }
    }
  }
}
Step 4: Let A2A handle the orchestration.
When an A2A orchestrator delegates a task like “analyze last month’s sales trends,” the downstream analytics agent connects to your Faucet instance via MCP, runs the appropriate queries against its read-only scoped credentials, and returns results through the A2A delegation chain. Your database never needed to know about A2A — it just served MCP tool calls through a governed API layer.
The Context Window Problem Gets Worse with Multi-Agent
Perplexity CTO Denis Yarats raised an important concern at the Ask 2026 conference: MCP tool descriptions consume 40–50% of available context windows before agents do any actual work. This is already a problem with single-agent setups.
With A2A multi-agent architectures, it compounds. Each agent in the delegation chain loads its own set of MCP tools. If every agent connects to the same database with the full tool set, you’re paying the context window tax multiple times across the system.
The solution is the same principle that makes microservices work: each agent should only load the tools it needs. With Faucet’s RBAC, this happens naturally — a read-only role only sees read endpoints, which means the MCP tool list is half the size. An agent scoped to three tables only loads tools for those tables, not all fifty.
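The effect of scoping is easy to quantify with a back-of-the-envelope sketch. The 4-characters-per-token heuristic and the tool counts below are assumptions, not measurements:

```python
# Back-of-the-envelope estimate of the context-window tax a tool list
# imposes. The ~4-chars-per-token heuristic is a crude assumption.
def tool_list_tokens(tools: list[dict]) -> int:
    """Rough prompt-token estimate for a list of tool definitions."""
    text = "".join(t["name"] + t["description"] for t in tools)
    return len(text) // 4

# A full tool set vs. the subset a read-only role would see.
full_tools = [{"name": f"op_{i}", "description": "x" * 80} for i in range(50)]
read_only_tools = full_tools[:25]  # scoped role loads half the tools
```

Halving the tool list halves the tax — and in a delegation chain where every agent pays it, the savings multiply.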
Lean tool definitions aren’t just a performance optimization. In a multi-agent world, they’re an architectural necessity.
What Comes Next
The MCP 2026 roadmap, published March 9 by lead maintainer David Soria Parra, identifies four focus areas: scaling Streamable HTTP transport for horizontal deployments, closing lifecycle gaps in the Tasks primitive, building enterprise readiness features around audit trails and SSO, and publishing a standard metadata format for server discovery without a live connection.
That last item — server discovery metadata — is particularly relevant. It would allow MCP servers to publish their capabilities in a format that A2A Agent Cards could reference directly. The gap between “here are my MCP tools” and “here’s what I can do as an agent” is narrowing.
Meanwhile, SurePath AI launched MCP Policy Controls on March 12, giving security teams real-time control over which MCP servers and tools AI clients can access. This is enterprise plumbing — the kind of unglamorous infrastructure work that signals a technology is crossing from developer experimentation to production deployment.
The two-protocol world is here. MCP handles how agents access your data. A2A handles how agents find each other and delegate work. Your database is the resource both protocols ultimately serve. Making it accessible through a clean, typed, governed API layer isn’t just good practice anymore — it’s infrastructure.
Getting Started
Get your database speaking MCP in under a minute:
# Install Faucet
curl -fsSL https://get.faucet.dev | sh
# Connect to any supported database (PostgreSQL, MySQL, SQL Server, Oracle, SQLite, Snowflake)
faucet serve --db postgres://localhost/mydb
# Your database now has:
# - REST API endpoints for every table
# - OpenAPI 3.1 spec at /api/_spec
# - MCP server at /mcp with typed tool definitions
# - Safety annotations on every operation
Faucet supports PostgreSQL, MySQL, SQL Server, Oracle, SQLite, and Snowflake. Single binary, no runtime dependencies, no code generation step. Your database becomes agent-ready in the time it takes to run two commands.