Platform engineering had a breakout year. Gartner now predicts 80% of large software engineering organizations will have dedicated platform teams by the end of 2026. The pitch is compelling: stop every team from rebuilding CI pipelines, secrets management, observability stacks, and deployment infrastructure from scratch. Package it as internal products with self-service APIs. Reduce cognitive load. Ship faster.
It mostly works. The data backs it up — organizations that adopt internal developer platforms report 40–50% reductions in cognitive load and measurable improvements in deployment frequency. Platform engineering has genuinely fixed a class of problems that were dragging teams down.
But platform engineering has a blind spot: the database.
The Sprawl That Doesn’t Get Fixed
When you survey developers about what fragments their attention, the numbers are stark. Developers use an average of 7.4 tools daily. 75% lose between 6 and 15 hours weekly because of tool fragmentation. Nearly half — 48% — report feeling overwhelmed by the number of tools they use daily.
Platform engineering attacks that sprawl by creating golden paths. Need a Kubernetes cluster? Run a self-service template. Need a CI pipeline? There’s a catalog entry for that. Need to rotate secrets? The platform handles it through a clean API.
What you almost never find in those same internal developer portals: a clean, governed path to query a production database.
Instead, every team reinvents this. The backend team builds a data access layer. The data team exposes a different set of endpoints. The analytics team has read credentials hard-coded in a notebook somewhere. The AI team is writing LangChain code that connects to production Postgres directly because there’s no other option that doesn’t involve a two-week ticket queue.
According to the DevOps.com State of Developer Experience Report, 98% of developers say that better APIs would reduce the number of tools they have to use. The irony is that most platform engineering initiatives still leave database access — the foundational data layer — to be solved ad hoc by every team, every time.
Why Database Access Is Different
Every other infrastructure concern that platform engineering tackles follows a predictable pattern: the resource has a clear owner, a standardized interaction model, and a lifecycle the platform can manage.
Databases don’t fit that pattern cleanly.
A Kubernetes cluster is provisioned on demand and torn down. A secret has a rotation policy. A build pipeline has inputs and outputs. But a production database has decades of schema history, data owned by multiple teams, access patterns ranging from high-throughput OLTP to ad hoc analytical queries, compliance requirements that vary by table and column, and business logic embedded in stored procedures that nobody fully understands anymore.
The standard platform engineering response — “add it to the service catalog” — doesn’t capture this complexity. You can’t template a database the way you template a pipeline.
What you can do is put a governed API layer in front of it.
What Self-Service Database APIs Actually Look Like
A self-service database API is not a bespoke REST service you write and maintain. It’s a configuration-driven layer that reflects your actual schema and enforces your actual access rules, without requiring code.
Here’s what a team actually encounters when using Faucet as that layer:
# Install once, connect to any database
brew install faucetdb/tap/faucet
# Register a connection
faucet connections add prod-reporting \
--driver postgres \
--dsn "postgresql://user@host:5432/analytics"
# Start the API server
faucet serve
Within seconds, every table in analytics has a documented, parameterized REST endpoint:
GET /api/prod-reporting/orders?status=shipped&_limit=100
GET /api/prod-reporting/orders/{id}
POST /api/prod-reporting/orders
PUT /api/prod-reporting/orders/{id}
No scaffolding. No code review cycle for the API layer itself. No separate deployment pipeline. The schema is read at startup; the endpoints are live.
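To make the mapping concrete, here is a small sketch of how a schema-driven layer like this conceptually expands reflected tables into CRUD routes. The function and data below are illustrative, not Faucet's actual internals:

```python
# Illustrative sketch: how a schema-reflection layer might map tables onto
# REST routes. Names and structure here are hypothetical.

def routes_for_table(connection: str, table: str) -> list[dict]:
    """Derive the CRUD route set for one reflected table."""
    base = f"/api/{connection}/{table}"
    return [
        {"method": "GET", "path": base},            # list, with filter params
        {"method": "GET", "path": base + "/{id}"},  # fetch one row
        {"method": "POST", "path": base},           # insert
        {"method": "PUT", "path": base + "/{id}"},  # update
    ]

# A reflected schema (hard-coded here) expands into the full endpoint table:
tables = ["orders", "customers"]
routes = [r for t in tables for r in routes_for_table("prod-reporting", t)]
print(len(routes))  # 8 routes: 4 per table
```

The point of the sketch is that the route set is a pure function of the schema, which is why no scaffolding or code review is needed when a table is added.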
That’s the self-service part. But the part that makes it suitable for a platform context is what happens next: access control.
RBAC at the Layer That Matters
Most database access control happens at the credential level: this service account can read this database. That’s coarse-grained and hard to audit. When an AI agent or analytics tool has a read credential, it can read everything that credential can see — all tables, all columns, including the ones with PII.
A governed API layer enforces access at a more precise level.
# Create a role with restricted column access
faucet roles create analyst-read \
--allow "prod-reporting.orders:id,created_at,status,total_amount" \
--allow "prod-reporting.customers:id,region,tier" \
--deny "prod-reporting.customers:email,phone,ssn"
# Issue a token for the analytics team
faucet tokens create --role analyst-read --name "analytics-team"
Now the analytics team has a token that lets them query orders and customers — but they can’t read email addresses or phone numbers even if they try. The API layer enforces it before the query hits the database. No stored procedures needed. No row-level security configuration in Postgres. The policy lives in one place and generates an audit trail.
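Conceptually, the enforcement step is a set comparison between the columns a request asks for and the columns the role permits, performed before any SQL is issued. The following sketch shows that check in miniature; it is not Faucet's implementation, and the role table is hypothetical:

```python
# Conceptual sketch of column-level RBAC enforcement at the API layer.
# The role definitions mirror the `faucet roles create` example above.

ROLE = {
    "analyst-read": {
        "prod-reporting.orders": {"id", "created_at", "status", "total_amount"},
        "prod-reporting.customers": {"id", "region", "tier"},
    }
}

def check_columns(role: str, table: str, requested: set[str]) -> set[str]:
    """Reject the request before it reaches the database if any column is not allowed."""
    allowed = ROLE.get(role, {}).get(table)
    if allowed is None:
        raise PermissionError(f"{role} has no access to {table}")
    denied = requested - allowed
    if denied:
        raise PermissionError(f"columns not permitted: {sorted(denied)}")
    return requested

# A permitted query passes through untouched:
check_columns("analyst-read", "prod-reporting.customers", {"id", "region"})

# A request touching PII columns fails before execution:
try:
    check_columns("analyst-read", "prod-reporting.customers", {"id", "email"})
except PermissionError as e:
    print(e)  # columns not permitted: ['email']
```

Because the check runs in the API layer, the policy is identical for every client and every driver, which is what makes it auditable in one place.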
This is what platform engineering actually needs for database access: not just connectivity, but governed connectivity with auditable policies.
The OpenAPI Contract as a Self-Service Catalog Entry
One of the core platform engineering patterns is discoverability — developers should be able to find what’s available without asking anyone. Internal developer portals built on Backstage achieve this for services, but the database layer is usually a gap.
Every Faucet instance automatically generates an OpenAPI 3.1 spec for every registered connection:
# Pull the spec for the analytics connection
curl http://localhost:8080/openapi/prod-reporting.json
# Or browse it in the built-in UI
open http://localhost:8080/ui
That spec is machine-readable and can be imported directly into Backstage, Swagger UI, Postman, or any OpenAPI-aware tool. It documents every endpoint, every parameter, every response schema — auto-generated from the live database schema with no manual maintenance.
When you have a dozen databases across an organization, each one gets its own spec. A developer onboarding to a new team can open the catalog, find the database they need, and see exactly what data is available and how to query it — without asking the data team, without waiting for a ticket, without needing a read credential to the production database.
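A portal integration reduces to flattening each spec's paths object into catalog entries. The sketch below uses a minimal stand-in dict for what an endpoint like /openapi/prod-reporting.json might return; the field names follow OpenAPI 3.1, but the summaries are invented:

```python
# Sketch: turning a fetched OpenAPI document into portal catalog entries.
# `spec` is a hand-written stand-in for a real generated spec.

spec = {
    "openapi": "3.1.0",
    "info": {"title": "prod-reporting", "version": "1.0.0"},
    "paths": {
        "/api/prod-reporting/orders": {
            "get": {"summary": "List orders"},
            "post": {"summary": "Insert an order"},
        },
        "/api/prod-reporting/orders/{id}": {
            "get": {"summary": "Fetch one order"},
        },
    },
}

def catalog_entries(spec: dict) -> list[str]:
    """Flatten an OpenAPI spec into 'METHOD path' strings for a portal index."""
    return sorted(
        f"{method.upper()} {path}"
        for path, ops in spec["paths"].items()
        for method in ops
    )

for entry in catalog_entries(spec):
    print(entry)
```

Since the spec is regenerated from the live schema, the catalog entries never drift from what the database actually exposes.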
The AI Agent Dimension
Platform engineering’s current challenge is accommodating a new class of consumer: AI agents.
Gartner projects that by 2026, more than 30% of the increase in API demand will come from AI and tools using large language models. Those tools don't browse internal developer portals the way humans do; they discover and use tools programmatically via MCP (Model Context Protocol).
Faucet ships a built-in MCP server:
# Start with MCP enabled
faucet serve --mcp
# Claude Desktop config
{
  "mcpServers": {
    "databases": {
      "command": "faucet",
      "args": ["mcp", "--connection", "prod-reporting"]
    }
  }
}
The MCP server exposes the same RBAC-controlled, OpenAPI-documented data layer that your human developers use. When an AI agent queries via MCP, it goes through the same access control rules as any other client. The audit log captures it the same way.
This matters for platform engineering teams because it means one governed layer serves both audiences. You don’t need a separate “AI-safe” API and a “developer API” — the same layer, with the same policies, handles both.
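The "one governed layer" claim can be pictured as a single request handler that both transports funnel into. The sketch below is illustrative only; the record fields and function names are hypothetical, not Faucet's audit schema:

```python
# Sketch of one policy-and-audit path serving two audiences: every request,
# whether it arrives over HTTP or via MCP, lands in the same audit log.

import time

AUDIT_LOG: list[dict] = []

def handle_query(caller: str, transport: str, table: str, columns: list[str]):
    # ...the policy check (e.g., column RBAC) would run here, identically
    # for both transports...
    AUDIT_LOG.append({
        "ts": time.time(),
        "caller": caller,        # token identity, human team or agent
        "transport": transport,  # "http" or "mcp"
        "table": table,
        "columns": columns,
    })

# A developer via REST and an agent via MCP produce the same record shape:
handle_query("analytics-team", "http", "prod-reporting.orders", ["id", "status"])
handle_query("claude-desktop", "mcp", "prod-reporting.orders", ["id", "status"])
print(AUDIT_LOG[-1]["transport"])  # mcp
```

Only the transport field differs between the two entries; the policy surface and the audit surface stay singular.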
The Cognitive Load Argument
Let’s be direct about what the cognitive load problem actually costs.
If a developer needs to query a database they haven’t touched before, here’s the current typical path: find someone with the schema docs (if they exist), request credentials or ask the data team to write the query for them, figure out which ORM or query builder the team uses, understand the connection pooling setup, make sure they’re not accidentally hitting production, write the query, handle pagination, handle errors, and add logging.
That’s half a day of context-switching minimum, and most of it is not the developer’s core competency. It’s infrastructure tax.
The platform engineering answer should be: here’s a URL, here’s a token, here’s the OpenAPI spec, here’s how to query it. Everything else is handled.
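From the consumer's side, that golden path collapses to one authenticated HTTP request. The endpoint and token below are placeholders, and only the request object is built here; nothing is sent:

```python
# What the golden path reduces to for a developer: a URL, a token, a request.
# Host and token are placeholders, not real Faucet values.

import urllib.request

token = "fct_example_token"  # would be issued via `faucet tokens create`
url = "http://faucet.internal:8080/api/prod-reporting/orders?status=shipped&_limit=100"

req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
# urllib.request.urlopen(req) would execute the query; auth, column policy,
# pagination, and audit logging are handled server-side by the API layer.
print(req.get_header("Authorization"))  # Bearer fct_example_token
```

Everything the half-day of context-switching used to cover, from connection pooling to error handling to logging, sits behind that one call.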
That’s what self-service database APIs enable. Not just convenience — a genuine reduction in the cognitive overhead of data access that currently sits outside the scope of most platform engineering initiatives.
The Binary Constraint
One reason database API layers haven’t caught on as a platform primitive is operational friction: they’re typically another service to deploy, monitor, scale, and update. That overhead undercuts the cognitive load argument.
Faucet’s single-binary architecture removes that friction:
# Deploy as a sidecar, a background process, or a managed service
# The binary has no external dependencies — no JVM, no runtime, no package manager
faucet serve --connection prod-reporting --port 8080 --daemon
The binary is ~50 MB. It runs on Linux, macOS, and Windows. It doesn’t need Docker (though it has a Docker image). It doesn’t need a separate database for its own configuration — config lives in a local SQLite file by default.
For platform engineering teams, this means the database API layer can be a package installed on existing infrastructure rather than a new service requiring its own SLA. It integrates into whatever deployment pattern the platform already uses.
What This Looks Like in Practice
A practical implementation for a platform team:
- Catalog all database connections using faucet connections add: one entry per logical database, pointing at a read-appropriate credential.
- Define roles mapping to team access needs: analyst-read, app-service-write, admin-full.
- Generate tokens for each team or service, bound to appropriate roles.
- Publish the OpenAPI specs to the internal developer portal — auto-generated, auto-updated when the schema changes.
- Enable MCP for AI agent access to the same layer.
- Enable the audit log for compliance: every query, every caller, every result.
The entire setup takes under an hour for a typical multi-database environment. The ongoing maintenance is near-zero — schema changes are reflected automatically at startup, and policy changes take effect immediately.
The Gap in the Golden Path
Platform engineering is excellent at creating golden paths for everything except the thing developers spend the most time wrangling: data.
The tools exist to close that gap. Self-service database APIs — governed, OpenAPI-documented, RBAC-enforced, accessible to both human developers and AI agents — are a natural fit for the platform engineering catalog. They reduce cognitive load in the same way that templated CI pipelines and self-service secrets management do.
The difference is that most platform teams haven’t thought of database access as something they can package and govern. They treat it as someone else’s problem — the data team’s problem, or the application team’s problem, or just… a problem.
It doesn’t have to be. One binary, one configuration file, one set of access policies, and your database layer becomes a first-class platform service.
Get started:
brew install faucetdb/tap/faucet
faucet --help
Full documentation at faucet.dev. The binary is free and open-source. Source at github.com/faucetdb/faucet.