Only 1 in 10 Companies Can Scale Their AI Agents — Here's the Missing Piece

MIT Technology Review found that nearly two-thirds of companies experiment with AI agents, but only 10% scale to production. The bottleneck isn't the model — it's the data infrastructure layer.

MIT Technology Review published a finding on March 10, 2026 that should alarm every engineering leader investing in AI agents: nearly two-thirds of companies are experimenting with AI agents, but only 10% have successfully scaled them to production.

Read that ratio again. For every ten companies running AI agent pilots, nine are stuck. They have the models. They have the use cases. They have executive sponsorship. And they cannot get past the pilot stage.

The natural assumption is that the bottleneck is model capability — that agents are not smart enough, not reliable enough, not ready. That assumption is wrong. The bottleneck is data infrastructure. Specifically, the inability to give agents structured, reliable, authenticated access to the business data living in relational databases.

The Real Problem: Agents Cannot Reach the Data

AI agents are only as useful as the data they can access. An agent that can reason brilliantly but cannot read your customers table or write to your orders table is a demo, not a product.

The MIT Technology Review article makes this explicit: organizations that succeed at scaling agents are the ones that invested in their data infrastructure layer first. The ones that failed treated data access as an afterthought — something to figure out after the model worked.

Here is what happens in practice. A team builds a proof-of-concept agent. It uses a handful of mock records or a small CSV export. The demo is impressive. Leadership approves production deployment. Then the team discovers that getting the agent to reliably read and write production database data requires building an entire API layer from scratch.

That is where most initiatives stall and die.

The Traditional Approach: Months of Plumbing

Connecting an AI agent to a production database through a proper API layer traditionally requires a stack of engineering work that has nothing to do with AI:

Custom REST API development. For each database table, you need endpoints for create, read, update, and delete operations. A database with 80 tables means hundreds of endpoints. Each one needs input validation, query construction, error handling, and response formatting.

ORM configuration. You need to map database schemas to application models. Every column type, every relationship, every constraint needs a corresponding representation in your application layer. When the schema changes, the ORM mappings need to change too.

Middleware and infrastructure. Authentication, authorization, rate limiting, request logging, connection pooling, query timeouts, pagination, filtering, sorting — none of this comes for free. Each piece is a week of engineering time at minimum.

API documentation. Your agents need machine-readable API specs to know what endpoints exist and how to call them. Writing and maintaining OpenAPI specifications by hand is tedious, error-prone, and almost always out of date within weeks of the initial release.

Schema synchronization. Databases evolve. Columns get added, types change, tables get renamed. Every schema change is a potential breaking change for your API layer, which means it is a potential breaking change for every agent consuming that API. Keeping everything in sync requires discipline that most teams do not have bandwidth for.

Add it up and you are looking at three to six months of dedicated engineering work before a single agent can execute its first real query against production data. For organizations running multiple databases — PostgreSQL for the main application, MySQL for the legacy system, SQL Server for the data warehouse — multiply accordingly.

This is why 90% of companies cannot scale their agents. The model is ready. The data infrastructure is not.

What Agents Actually Need

Strip away the complexity and the requirements are straightforward. An AI agent needs four things from a data layer:

  1. HTTP endpoints over database tables. Agents speak HTTP and JSON. Databases speak SQL. Something needs to bridge that gap with zero ambiguity.

  2. Machine-readable API specifications. Agents cannot read documentation pages. They need OpenAPI specs that describe every endpoint, parameter, and response schema programmatically.

  3. Filtering, pagination, and sorting. Agents operating on production data need to query selectively. Dumping entire tables is not an option. Field-level filtering, result limits, and sort controls are mandatory.

  4. Access control. Production agents need scoped permissions. An agent handling customer support should not have write access to the billing table. Authentication and role-based access control are non-negotiable for production deployment.
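The four requirements above can be sketched from the agent's side. The following is a minimal illustration of building a scoped, filtered table read; the `status` filter and `_limit` parameter follow the examples later in this article, while the `_sort` parameter and the `Authorization` header name are assumptions, not documented Faucet behavior.

```python
from urllib.parse import urlencode

BASE = "http://localhost:8080/api"

def build_request(table, api_key, filters=None, limit=None, sort=None):
    """Construct the URL and headers for a scoped, filtered table read."""
    params = dict(filters or {})
    if limit is not None:
        params["_limit"] = limit   # cap the result set (requirement 3)
    if sort is not None:
        params["_sort"] = sort     # assumed sort parameter (requirement 3)
    url = f"{BASE}/{table}"
    if params:
        url += "?" + urlencode(params)
    headers = {"Authorization": f"Bearer {api_key}"}  # scoped access (requirement 4)
    return url, headers

url, headers = build_request("customers", "agent-key-123",
                             filters={"status": "active"}, limit=100)
```

Everything an agent needs is in that URL and header pair: an HTTP endpoint over a table, selective querying, and a credential that maps to a role.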

Building all of this by hand is the three-to-six-month project described above. But it does not have to be.

Faucet: From Database to Production API in 10 Seconds

Faucet is an open-source tool that eliminates the data infrastructure bottleneck entirely. Point it at any supported database and it generates a complete REST API instantly. No code generation, no ORM configuration, no middleware setup.

Install and run:

brew install faucetdb/tap/faucet
faucet serve --dsn "postgres://user:pass@localhost/mydb"
# That's it. Full REST API at localhost:8080

Two commands. Every table in your database now has a full set of CRUD endpoints with filtering, pagination, sorting, and an auto-generated OpenAPI 3.1 specification.

# Every table gets CRUD endpoints
curl http://localhost:8080/api/customers
curl "http://localhost:8080/api/customers?status=active&_limit=100"
curl -X POST http://localhost:8080/api/orders \
  -H "Content-Type: application/json" \
  -d '{"customer_id": 1, "total": 99.99}'

Your AI agent can discover available endpoints through the OpenAPI spec at /api/_openapi.json, understand the schema of every table, and execute typed queries immediately. No integration sprint required.
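For agents written in Python rather than shell, the same POST can be built with the standard library alone. This mirrors the curl example above; the request is constructed but not sent, and the JSON content type follows standard API conventions rather than anything Faucet-specific.

```python
import json
from urllib.request import Request

# Build the equivalent of the curl POST above: insert one row into orders.
payload = {"customer_id": 1, "total": 99.99}
req = Request(
    "http://localhost:8080/api/orders",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```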

Seven Databases, One Binary

Faucet supports seven database backends through a single binary:

  • PostgreSQL — including managed services like Amazon RDS, Cloud SQL, and Supabase
  • MySQL — 5.7+ and 8.x
  • MariaDB — 10.x+
  • SQL Server — 2017+ and Azure SQL
  • Oracle — 12c+ and Oracle Cloud
  • SQLite — embedded databases and local development
  • Snowflake — data warehouse access for analytics agents

Switch the DSN, keep the same API surface. An agent built against Faucet’s API for PostgreSQL works identically against MySQL or SQL Server. The abstraction layer handles dialect differences, type mappings, and query generation transparently.

This matters because real enterprises do not have one database. They have five. The agent that needs customer data from PostgreSQL, inventory data from SQL Server, and analytics from Snowflake can access all three through the same consistent API pattern.

Auto-Generated OpenAPI 3.1

Every Faucet instance automatically generates a complete OpenAPI 3.1 specification that describes every endpoint, every parameter, every request body, and every response schema. This is not a static file that drifts out of sync — it is generated live from the current database schema.

For AI agents, this is the difference between working and not working. An agent can fetch the spec, understand what data is available, and construct valid API calls without any human guidance. OpenAPI 3.1 is the lingua franca of agent-to-API communication, and Faucet generates it as a first-class feature, not an afterthought.
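In practice, the discovery step looks like this: fetch the spec from /api/_openapi.json, then walk its `paths` object to see which tables and operations exist. The spec fragment below is a hypothetical illustration of the shape of an OpenAPI 3.1 document, not Faucet's actual output.

```python
def discover_endpoints(spec: dict) -> dict:
    """Map each path in an OpenAPI document to its available HTTP methods."""
    methods = {"get", "post", "put", "patch", "delete"}
    return {
        path: sorted(m.upper() for m in ops if m in methods)
        for path, ops in spec.get("paths", {}).items()
    }

# Hypothetical fragment of a generated spec, as an agent would see it
sample_spec = {
    "openapi": "3.1.0",
    "paths": {
        "/api/customers": {"get": {}, "post": {}},
        "/api/orders": {"get": {}},
    },
}
endpoints = discover_endpoints(sample_spec)
```

From that mapping, an agent knows it can read and create customers but only read orders, without a human ever writing integration docs.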

Production Concerns: Security, Stability, and Control

Getting an API running quickly is the easy part. The hard part — the part that keeps the 90% stuck — is making it production-ready. Faucet addresses the three concerns that matter most.

Role-Based Access Control

Faucet includes built-in RBAC that lets you define granular permissions per role, per table, per operation. A customer-facing agent gets read access to the products and orders tables. An internal analytics agent gets read access to everything. A data entry agent gets write access to specific tables only.

Roles are configured declaratively. No custom middleware, no auth service to deploy and maintain. Define the role, assign the API key, scope the permissions.
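To make the permission model concrete, here is a sketch of per-role, per-table, per-operation scoping using the three example agents above. The data structure and the `"*"` wildcard are illustrative only; they are not Faucet's actual configuration format.

```python
# Illustrative permission model only -- not Faucet's actual config format.
ROLES = {
    "support_agent":   {"products": {"read"}, "orders": {"read"}},
    "analytics_agent": {"*": {"read"}},                 # read everything
    "entry_agent":     {"orders": {"read", "create"}},  # write to specific tables
}

def is_allowed(role: str, table: str, operation: str) -> bool:
    """Check one role's permission for one operation on one table."""
    grants = ROLES.get(role, {})
    allowed = grants.get(table, set()) | grants.get("*", set())
    return operation in allowed
```

The point of the declarative shape is that the entire access surface is auditable at a glance, rather than buried in middleware code.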

API Key Authentication

Every agent gets its own API key with its own role assignment. This gives you per-agent audit trails, per-agent rate limiting, and the ability to revoke access to a specific agent without affecting others. When an agent misbehaves — and they will — you can shut it down in seconds.

Schema Contract Stability

This is the one that catches most teams off guard. Your database schema will change. Columns will be added, types will be modified, tables will be renamed. In a hand-built API layer, every schema change is a potential silent breaking change that causes agent failures at runtime.

Faucet’s schema contract locking detects schema changes and prevents them from silently breaking existing API contracts. When a database migration would alter the API surface, you get an explicit notification rather than a mysterious agent failure at 3 AM. This is the difference between a production system and a prototype.
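The idea behind contract checking can be illustrated with a few lines of Python: diff two schema snapshots and flag anything that would break an existing consumer, while letting additive changes pass. This sketches the concept only; it is not Faucet's implementation.

```python
def breaking_changes(old: dict, new: dict) -> list:
    """Report schema changes that would break an existing API contract.

    Schemas are {table: {column: type}} snapshots. Removed tables or columns
    and changed column types break consumers; added columns are compatible.
    """
    problems = []
    for table, cols in old.items():
        if table not in new:
            problems.append(f"table removed: {table}")
            continue
        for col, typ in cols.items():
            if col not in new[table]:
                problems.append(f"column removed: {table}.{col}")
            elif new[table][col] != typ:
                problems.append(f"type changed: {table}.{col} ({typ} -> {new[table][col]})")
    return problems

old = {"customers": {"id": "int", "email": "text"}}
new = {"customers": {"id": "int", "email": "varchar(255)", "phone": "text"}}
report = breaking_changes(old, new)
```

Note that adding the `phone` column raises nothing, while the type change on `email` is surfaced before any agent hits it at runtime.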

The 10x Acceleration

Return to the MIT Technology Review finding. Nine out of ten companies cannot scale their AI agents to production. The common thread is a data infrastructure gap — months of engineering work to build the API layer that agents need to access business data.

Faucet compresses that timeline from months to minutes. Not by cutting corners, but by automating the mechanical work that was never the hard part to begin with. The hard part is deciding what data your agents should access, what permissions they should have, and what business logic they should execute. Faucet handles the plumbing so your team can focus on those decisions.

The companies in the 10% that successfully scaled their agents figured this out. They did not build faster — they eliminated the infrastructure work that was blocking them.

Get Started

Faucet is open source under the Apache 2.0 license.

# Install
brew install faucetdb/tap/faucet

# Run against your database
faucet serve --dsn "postgres://user:pass@localhost/mydb"

# Or MySQL, SQL Server, Oracle, SQLite, Snowflake, MariaDB
faucet serve --dsn "mysql://user:pass@localhost/mydb"
faucet serve --dsn "sqlserver://user:pass@localhost?database=mydb"

GitHub: github.com/faucetdb/faucet

Your AI agents are ready. Your data infrastructure should be too.