Today, as hundreds of developers gather at the Javits Center for the first MCP Dev Summit North America, the Model Context Protocol has officially crossed from “interesting Anthropic side project” to “industry infrastructure.” The numbers tell the story: over 10,000 MCP servers indexed across public registries, 97 million monthly SDK downloads, and official support from Anthropic, OpenAI, Google, and Microsoft.
But behind those impressive numbers is a pattern that should concern anyone building AI agents that talk to databases: fragmentation.
The Vendor MCP Race Is On
In the past 30 days alone, three of the biggest database vendors on the planet shipped MCP servers:
Oracle announced the Autonomous AI Database MCP Server on March 24 at the Oracle AI World Tour in London. It lets AI agents access Oracle databases through MCP without custom integration code. They also shipped the SQLcl MCP Server through the VS Code extension, which auto-registers with Microsoft Copilot on activation.
Google expanded its MCP Toolbox for Databases (formerly Gen AI Toolbox) to support AlloyDB, Spanner, Cloud SQL for PostgreSQL, MySQL, and SQL Server. It’s open-source, and third-party contributors have added Neo4j and Dgraph support.
Microsoft advanced its “agentic AI with Microsoft databases” initiative on March 18, positioning Azure SQL as the AI-native database with built-in MCP capabilities.
Meanwhile, Preset launched Preset MCP on April 1 for enterprise analytics — AI that builds dashboards, runs SQL, and explores datasets through MCP.
This is validation. When Oracle, Google, and Microsoft all independently decide that MCP is the way AI agents should access databases, the protocol debate is over. MCP won.
But there’s a catch.
The Fragmentation Tax
Here’s what happens when every vendor ships their own MCP server: you get a different MCP server for every database.
Oracle’s MCP server speaks Oracle. Google’s MCP Toolbox speaks AlloyDB, Spanner, and Cloud SQL. Microsoft’s speaks Azure SQL. Each has its own deployment model, its own authentication scheme, its own tool surface area, and its own opinions about what operations AI agents should be allowed to perform.
If your stack is homogeneous — all Oracle, all Google Cloud, all Azure — this is fine. You pick your vendor’s MCP server and move on.
But most organizations don’t have homogeneous stacks. The 2025 Percona survey found that 89% of enterprises run two or more database engines in production. The median is four. A typical enterprise might have PostgreSQL for the application tier, MySQL for a legacy service, SQL Server for the finance team’s reporting database, and an Oracle instance that nobody wants to touch but everybody depends on.
In that world, “every vendor ships an MCP server” means you’re now running four MCP servers with four different deployment models, four authentication schemes, and four sets of tool definitions that your AI agents need to understand.
This is the database fragmentation tax, and it’s the problem nobody at the MCP Dev Summit is talking about — because the people on stage are the ones selling you individual servers.
What Fragmentation Looks Like in Practice
Let’s make this concrete. Say you’re building an internal AI agent that helps your support team answer customer questions. The agent needs to:
- Look up the customer in your PostgreSQL application database
- Check their billing status in the MySQL legacy billing system
- Pull their recent support tickets from a SQL Server data warehouse
With the vendor-specific approach, your agent configuration looks like this:
```json
{
  "mcpServers": {
    "google-toolbox-postgres": {
      "command": "mcp-toolbox",
      "args": ["--source", "postgres://app-db:5432/customers"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/gcp-creds.json"
      }
    },
    "mysql-community-server": {
      "command": "mysql-mcp-server",
      "args": ["--host", "billing-mysql.internal", "--port", "3306"],
      "env": {
        "MYSQL_USER": "agent",
        "MYSQL_PASSWORD": "..."
      }
    },
    "azure-sql-mcp": {
      "command": "azure-sql-mcp",
      "args": ["--server", "reporting.database.windows.net"],
      "env": {
        "AZURE_TENANT_ID": "...",
        "AZURE_CLIENT_ID": "...",
        "AZURE_CLIENT_SECRET": "..."
      }
    }
  }
}
```
Three servers. Three authentication mechanisms. Three sets of tools with slightly different naming conventions (query_database vs execute_sql vs run_query). Three things to monitor, upgrade, and secure.
Now imagine this across an organization with dozens of databases. The operational overhead scales linearly with database count.
With Faucet, the same setup collapses to:
```shell
# Start Faucet pointing at all three databases
faucet serve \
  --db postgres://app-db:5432/customers \
  --db mysql://agent:pass@billing-mysql:3306/billing \
  --db sqlserver://sa:pass@reporting.database.windows.net/warehouse
```
One binary. One MCP server. One authentication layer. One set of consistent tool names. Every database gets the same REST API surface and the same MCP tool definitions, regardless of the engine underneath.
```json
{
  "mcpServers": {
    "faucet": {
      "command": "faucet",
      "args": ["mcp", "--db", "postgres://...", "--db", "mysql://...", "--db", "sqlserver://..."]
    }
  }
}
```
Why Consistency Matters for AI Agents
This isn’t just about reducing JSON in your agent config. Consistency in tool definitions directly impacts agent performance.
When an AI agent encounters three different MCP servers with three different tool schemas, it has to spend context window tokens understanding the differences. Is it table_name or tableName? Does this server return results as rows or data? Does query mean “read-only SQL” on this server and “any SQL” on that one?
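As a purely hypothetical illustration (these tool definitions are invented for this post, not copied from any vendor’s server), two fully protocol-compliant servers might describe the same read operation like this:

```json
[
  {
    "name": "query_database",
    "description": "Execute a read-only SQL query",
    "inputSchema": {
      "type": "object",
      "properties": {
        "table_name": { "type": "string" },
        "sql": { "type": "string" }
      }
    }
  },
  {
    "name": "run_query",
    "description": "Execute any SQL statement, including writes",
    "inputSchema": {
      "type": "object",
      "properties": {
        "tableName": { "type": "string" },
        "query": { "type": "string" }
      }
    }
  }
]
```

Both are valid MCP tools, but an agent wired to both has to reconcile the naming, the parameter shapes, and the differing safety semantics before it can use either one reliably.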
We wrote about the MCP context window tax last week — every tool definition your agent loads costs tokens. Loading three database MCP servers with overlapping but inconsistent tool schemas is the worst case: maximum token cost, minimum clarity.
A single MCP server with a unified schema across all databases means your agent learns one interface and it works everywhere. That’s fewer tokens spent on tool comprehension and more tokens available for actually reasoning about the user’s question.
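For contrast, a unified interface could expose a single tool whose only variable is which database it targets. The schema below is an illustrative sketch, not Faucet’s actual tool surface:

```json
{
  "name": "query",
  "description": "Run a read-only SQL query against any connected database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "database": { "type": "string", "description": "Logical name of a connected database" },
      "sql": { "type": "string" }
    },
    "required": ["database", "sql"]
  }
}
```

Because the engine is just a parameter, the agent’s understanding of the tool transfers to every connected database.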
The Summit’s Blind Spot
The MCP Dev Summit program features 95 sessions, with talks on protocol evolution, conformance testing, security research, and scalable agent design. Speakers from Anthropic, Datadog, Hugging Face, and Microsoft are presenting production deployment lessons.
One session title stands out: “MCP at 18 Months: Protocols, Patterns, and What We Didn’t See Coming.” The protocol has been on an incredible trajectory since Anthropic open-sourced it in late 2024 and donated it to the Agentic AI Foundation (under the Linux Foundation) in December 2025. The numbers are staggering — Bloomberry’s analysis of 1,400+ MCP servers found that database access is one of the most common categories, and SkillsIndex tracked an 873% increase in indexed servers from mid-2025 to early 2026.
But the summit’s focus on individual server implementations misses the meta-problem. The New Stack published a piece on MCP’s production growing pains, noting that “companies are clearly rushing to not be left behind the MCP hype, but the result is an ecosystem that’s growing incredibly fast but securing slowly.” The same applies to consistency — the ecosystem is growing incredibly fast but fragmenting just as quickly.
Conformance testing (another summit topic) helps ensure that MCP servers implement the protocol correctly. But it doesn’t ensure they implement database access consistently. Two MCP servers can both be 100% protocol-compliant and still present wildly different tool interfaces for the same fundamental operation: reading rows from a table.
What the Ecosystem Actually Needs
The MCP ecosystem doesn’t need more database-specific servers. It needs fewer, better ones that handle multiple databases behind a single interface. Here’s why:
1. Agents should think about data, not databases. When a support agent looks up a customer, it shouldn’t need to know that customer records live in PostgreSQL and billing records live in MySQL. It should just ask for the data. The database engine is an implementation detail.
2. RBAC should be centralized, not scattered. If you’re running three MCP servers, you’re managing three sets of access controls. When an employee leaves, you need to revoke access in three places. When you add a new role, you configure it three times. Faucet’s built-in RBAC layer lets you define roles once and apply them across all connected databases.
3. API consistency enables composability. When every database exposes the same REST endpoints and MCP tools, you can build higher-level abstractions that work universally. A generic “data explorer” agent works across your entire data estate, not just one database.
4. Operational overhead compounds. One MCP server to deploy, monitor, and upgrade is manageable. Ten is a full-time job. The vendor-specific model scales operational cost with database count. The unified model keeps it constant.
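To make point 2 concrete, a centralized role definition might look something like the sketch below. The syntax is hypothetical (Faucet’s real configuration format may differ); the point is that one definition governs every connected database:

```json
{
  "roles": {
    "support-agent": {
      "databases": ["customers", "billing", "warehouse"],
      "permissions": ["select"],
      "denyTables": ["payment_methods", "audit_log"]
    }
  }
}
```

Revoking the role in one place revokes it everywhere, for both REST requests and MCP tool calls.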
The REST API Angle
Here’s something else the MCP-only crowd misses: not everything talking to your database is an AI agent.
Your React frontend needs data. Your mobile app needs data. Your partner integrations need data. Your cron jobs need data. MCP is great for AI agents, but it’s not a replacement for REST APIs.
Faucet gives you both from the same binary. Every table gets a REST API with filtering, pagination, sorting, and field selection. And every table gets MCP tool access for AI agents. Same data, same access controls, two consumption patterns:
```shell
# REST API — for your frontend, mobile app, integrations
curl "http://localhost:8080/api/customers?email=jane@example.com"

# MCP — for your AI agents
# Same data, same RBAC, accessed through tool calls
```
The vendor MCP servers give you MCP. Period. If you also need REST, you’re adding PostgREST or Hasura or a custom API layer — another server, another deployment, another thing to secure.
Where This Is Headed
The MCP Dev Summit marks a turning point. The protocol is no longer a bet — it’s table stakes. Every major database vendor has acknowledged that AI agents need standardized database access, and MCP is the standard.
But we’re in the “everybody ships their own thing” phase, which historically precedes the “wait, we need a unification layer” phase. We saw this with cloud APIs (leading to Terraform), container orchestration (leading to Kubernetes), and CI/CD (leading to GitHub Actions).
For database MCP access, the unification layer is a tool that treats the database engine as a configuration parameter, not an architecture decision. That’s what Faucet does.
The 10,000+ MCP servers in the ecosystem are a testament to the protocol’s success. But for the specific problem of AI agents accessing databases, you don’t need 10,000 servers. You need one good one that handles all your databases, enforces consistent access controls, and gives you REST APIs as a bonus.
Getting Started
If you’re at the MCP Dev Summit today and want to try a unified approach to database MCP access, Faucet installs in one command:
```shell
curl -fsSL https://get.faucet.dev | sh
```
Point it at any PostgreSQL, MySQL, SQL Server, Oracle, Snowflake, or SQLite database:
```shell
# Start with your database — REST API + MCP server in one binary
faucet serve --db postgres://user:pass@localhost:5432/mydb

# Your AI agent can now access it via MCP
# Your frontend can hit the REST API at http://localhost:8080/api/
```
Connect it to Claude Code, Cursor, or any MCP-compatible client:
```shell
claude mcp add faucet -- faucet mcp --db postgres://localhost:5432/mydb
```
One binary. Six databases. REST + MCP. No vendor lock-in.
While the industry figures out which vendor’s MCP server to use for which database, you can skip the fragmentation tax entirely and ship today.