On March 24, Oracle announced Deep Data Security for Oracle AI Database 26ai. One week later, Okta revealed that its agent identity management platform ships April 30. In between, Microsoft expanded Sentinel to treat agents as first-class security principals, and Bessemer published data showing that 88% of organizations have already experienced AI agent security incidents.
These are not independent announcements. They are the industry converging on a single realization: AI agents are non-human identities, and your database needs to know the difference between an agent and a human.
The implications for how you expose database access are profound.
The Non-Human Identity Problem
Here is the situation most engineering teams are in today. An AI agent needs to query a database. The team gives it a connection string — maybe a service account with broad SELECT permissions, maybe the same credentials the application server uses. The agent runs queries. Data comes back. It works.
It works until it doesn’t.
The Gravitee State of AI Agent Security 2026 report found that only 22% of organizations treat AI agents as independent, identity-bearing entities. The other 78% use shared credentials, hardcoded API keys, or service accounts originally provisioned for batch jobs in 2019.
This matters because agents behave differently from applications. A traditional application runs predetermined queries in a predetermined order. An agent decides what to query based on context. It interprets natural language. It chains operations together. It makes judgment calls about what data to retrieve, how much to retrieve, and what to do with it.
When that agent shares credentials with your application server, every decision it makes inherits the full permission scope of that service account. There is no way to audit what the agent did versus what the application did. There is no way to restrict the agent to a subset of tables without restricting the application. There is no way to revoke the agent’s access without taking the application offline.
This is the non-human identity problem. And in March 2026, the biggest vendors in enterprise infrastructure decided it was time to solve it.
Oracle’s Answer: Identity-Aware Access at the Database Level
Oracle Deep Data Security takes a database-native approach. Instead of solving agent identity at the application layer or the API gateway, Oracle pushes authorization logic into the database engine itself.
The system supports row-level, column-level, and cell-level security policies expressed declaratively in SQL. Access control is decoupled from application logic — the rules live in the database, not in middleware, not in your API code, not in agent configuration files.
Here is what this looks like in practice:
```sql
-- Oracle Deep Data Security policy
-- Agent identity 'sales_forecast_agent' can only see
-- revenue data for territories it's assigned to
BEGIN
  DBMS_DDS.ADD_POLICY(
    object_schema => 'SALES',
    object_name   => 'REVENUE',
    policy_name   => 'agent_territory_filter',
    identity_name => 'sales_forecast_agent',
    filter_expr   => 'territory_id IN (
      SELECT territory_id FROM agent_assignments
      WHERE agent_id = SYS_CONTEXT(''DDS'', ''CURRENT_IDENTITY'')
    )'
  );
END;
```
The critical insight: the database knows who is querying. Not which connection pool, not which service account — which specific agent. When sales_forecast_agent runs SELECT * FROM revenue, Oracle filters rows automatically. A different agent with different permissions sees different data from the same table. No application code required.
This is powerful. It is also Oracle-specific, requires Oracle AI Database 26ai, and assumes your agents connect directly to the database.
Okta’s Answer: Every Agent Is an Identity
Okta is approaching the problem from the identity layer rather than the database layer. Their framework, launching April 30 in early access, treats AI agents as first-class entries in the Universal Directory — the same system that manages human users, service accounts, and API keys.
The model answers three questions:
- Where are my agents? Agent Discovery in Identity Security Posture Management (ISPM) finds shadow AI — agents deployed without security or IT approval. Given that only 14.4% of organizations report all agents going live with full approval, this is not a theoretical concern.
- What can they connect to? Each agent gets an attributed owner and a risk classification. You can see which databases, APIs, and services an agent has credentials for.
- What can they do? Security policies enforce least-privilege access with time-bound permissions. An agent gets the access it needs for the duration it needs it, with a complete audit trail.
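Okta has not published its agent APIs yet, but the time-bound, least-privilege model itself is simple enough to sketch. This is a minimal illustration, not Okta's implementation; every name here (`AgentGrant`, `allows`, the resource string) is invented:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentGrant:
    """A hypothetical time-bound, least-privilege grant for one agent."""
    agent_id: str
    resource: str              # e.g. a specific database, not "everything"
    scopes: frozenset          # e.g. {"read"}, never implicit write
    expires_at: datetime

    def allows(self, resource: str, scope: str, now: datetime) -> bool:
        # Deny by default: resource, scope, and expiry must all check out.
        return (
            resource == self.resource
            and scope in self.scopes
            and now < self.expires_at
        )

now = datetime(2026, 4, 1, tzinfo=timezone.utc)
grant = AgentGrant(
    agent_id="sales_forecast_agent",
    resource="postgres://orders-db",
    scopes=frozenset({"read"}),
    expires_at=now + timedelta(hours=8),   # access for the task, not forever
)

assert grant.allows("postgres://orders-db", "read", now)
assert not grant.allows("postgres://orders-db", "write", now)
assert not grant.allows("postgres://orders-db", "read", now + timedelta(days=1))
```

The point of the sketch is the shape of the check: scope, resource, and time all bound the grant, so an agent that outlives its task loses access automatically.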
The Okta approach is database-agnostic. It works with PostgreSQL, MySQL, SQL Server, Oracle, or any other data source. But it operates at the identity layer — it controls whether an agent can reach your database, not what it sees once connected.
Microsoft’s Answer: Unified Agent Security Context
Microsoft expanded Sentinel in March 2026 to unify context, automate end-to-end workflows, and standardize access governance across agentic deployments. Their guidance explicitly recommends treating every agent as an identity with its own credentials, permissions, and audit trail.
The Microsoft approach integrates with Entra ID (formerly Azure AD) and spans the full agent lifecycle: provisioning, credential rotation, access reviews, anomaly detection, and decommissioning. Like Okta, it is infrastructure-agnostic. Like Oracle, it acknowledges that agents need granular, identity-aware access control.
Three vendors. Three different layers. All arriving at the same conclusion in the same month.
The Gap Between Identity and Access
Here is the problem none of these solutions fully address on their own.
Oracle Deep Data Security gives you row-level, column-level filtering — but only on Oracle databases. If you run PostgreSQL, MySQL, or SQL Server, you need a different approach for each engine.
Okta tells you which agents exist and what they can connect to — but once an agent has database credentials, Okta does not control which tables it queries or which columns it reads.
Microsoft Sentinel provides governance and monitoring — but it is an observability and policy layer, not an access control enforcement point at the query level.
The gap is the API layer. The thing that sits between agent identity (who is this?) and database access (what can they see?). The thing that translates identity-level permissions into query-level filtering, regardless of which database engine is underneath.
This is not a new architectural pattern. It is the same role that REST APIs have filled for fifteen years. But the requirements have changed.
What Agent-Aware Database Access Actually Requires
When a human user hits your API, the request pattern is predictable. They authenticate, they make a few calls, the session ends. Rate limiting is straightforward. Permission scopes are static.
When an AI agent hits your API, the pattern changes:
Volume: An agent might make hundreds of API calls in a single task. A customer support agent resolving one ticket might query the orders table, the returns table, the customer profile, the shipping status, and the product catalog — in sequence, automatically, without human intervention.
Dynamism: The agent decides what to query based on context. You cannot predict the access pattern at provisioning time. The same agent might need read access to five tables today and twelve tables tomorrow, depending on what it is asked to do.
Chaining: Agents chain operations. The output of one query becomes the filter for the next. An agent might SELECT order_id FROM orders WHERE status = 'delayed', then use those IDs to query shipment tracking, then use tracking data to query carrier APIs. Each step escalates the data surface area.
Autonomy: There is no human reviewing each query before it executes. The agent decides, the agent executes, the agent returns results. If the permissions are wrong, the data leaks before anyone notices.
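The chaining property is the easiest to underestimate, so here it is in miniature. This sketch uses invented in-memory data in place of real queries; it only illustrates how each autonomous step widens the data surface the agent has touched:

```python
# Hypothetical in-memory "tables" standing in for real database queries.
orders = [
    {"order_id": 1, "status": "delayed", "tracking_id": "T-9"},
    {"order_id": 2, "status": "shipped", "tracking_id": "T-4"},
]
shipments = {"T-9": {"carrier": "ACME", "last_scan": "2026-03-28"}}

touched = set()  # every data source the agent has read so far

# Step 1: find delayed orders (SELECT order_id FROM orders WHERE status = 'delayed').
delayed = [o for o in orders if o["status"] == "delayed"]
touched.add("orders")

# Step 2: step 1's output becomes the filter for the next query.
tracking = [shipments[o["tracking_id"]] for o in delayed]
touched.add("shipments")

# Step 3: tracking data would now drive calls out to carrier APIs.
touched.add("carrier_api")

# Three autonomous steps, three data sources, zero human reviews.
assert touched == {"orders", "shipments", "carrier_api"}
```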
An agent-aware API layer needs to handle all four of these properties. Here is what that looks like with Faucet:
```shell
# Create a role for the forecasting agent
# Read-only access to specific tables and columns
faucet role create forecast_agent \
  --tables orders:read,products:read,inventory:read \
  --columns orders.customer_email:deny \
  --columns orders.payment_info:deny \
  --rate-limit 1000/hour

# Create an API key bound to that role
faucet apikey create \
  --role forecast_agent \
  --name "Q2 Forecast Agent" \
  --expires 2026-06-30
```
Now the forecasting agent can query orders, products, and inventory — but cannot see customer emails or payment information. It is rate-limited to 1,000 requests per hour. Its API key expires at the end of Q2. And every request is logged with the agent’s identity, not a generic service account.
```shell
# The agent queries through the REST API
# Faucet enforces role permissions automatically
curl -H "X-API-Key: faucet_abc123" \
  "https://api.example.com/api/orders?status=delayed&fields=order_id,product_id,ship_date"

# Column-level filtering is automatic
# customer_email and payment_info are never returned
# even if the agent explicitly requests them
```
This works the same way whether the underlying database is PostgreSQL, MySQL, SQL Server, Oracle, or SQLite. The identity-to-permission mapping lives in the API layer, not in database-specific security policies.
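The core of that identity-to-permission mapping is small enough to sketch. This is not Faucet's implementation, just a minimal, self-contained illustration of role-based column filtering at the API layer; the role table and function names are invented:

```python
# Hypothetical role definition: table permissions plus a column deny list.
ROLES = {
    "forecast_agent": {
        "tables": {"orders": "read", "products": "read", "inventory": "read"},
        "deny_columns": {"orders": {"customer_email", "payment_info"}},
    }
}

def authorize(role: str, table: str, requested_fields: list) -> list:
    """Return the fields this role may read, or raise if the table is off-limits."""
    perms = ROLES[role]
    if perms["tables"].get(table) != "read":
        raise PermissionError(f"role {role!r} cannot read table {table!r}")
    denied = perms["deny_columns"].get(table, set())
    # Denied columns are stripped before the query runs,
    # even when the agent explicitly asks for them.
    return [f for f in requested_fields if f not in denied]

fields = authorize("forecast_agent", "orders",
                   ["order_id", "customer_email", "ship_date"])
assert fields == ["order_id", "ship_date"]
```

Because the mapping lives above the database driver, the same logic applies unchanged whichever engine executes the final query.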
MCP Makes This Mandatory
Here is why this matters right now and not six months from now.
MCP — the Model Context Protocol — crossed 97 million monthly SDK downloads in March 2026. Every major AI provider ships MCP-compatible tooling. The 2026 MCP roadmap prioritizes remote server infrastructure: Streamable HTTP transport, OAuth 2.1 authentication, stateless server architectures that work with load balancers.
When MCP servers were local processes running on developer laptops, agent identity was an academic concern. The agent ran in the same security context as the developer. Whatever the developer could access, the agent could access.
Remote MCP servers change this fundamentally. An MCP server running as a shared service — accessible by multiple agents, multiple users, multiple teams — needs to know who is calling. Not which HTTP client. Which agent, operating on behalf of which user, with which permissions.
Faucet ships as both a REST API server and an MCP server from the same binary. The RBAC system applies uniformly to both interfaces:
```shell
# Start Faucet with both REST and MCP interfaces
faucet serve --db postgres://localhost/mydb --mcp

# The same role-based access control applies to both
# REST: curl -H "X-API-Key: ..." /api/orders
# MCP: agent connects via MCP, authenticates, same permissions
```
Tool annotations in the MCP spec — readOnlyHint, destructiveHint, idempotentHint — give agents metadata about what operations are safe to execute without confirmation. Combined with role-based access control, you get a layered security model: the agent knows what the operation does, and the API layer controls whether this specific agent is allowed to do it.
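That layered model can be sketched as two independent gates: the annotation describes what the operation does, and the role decides whether this agent may run it. The annotation field names (`readOnlyHint`, `destructiveHint`) come from the MCP spec; the role table and `gate` function are invented for illustration:

```python
# MCP tool annotations (field names per the MCP spec).
TOOLS = {
    "query_orders":  {"readOnlyHint": True,  "destructiveHint": False},
    "delete_orders": {"readOnlyHint": False, "destructiveHint": True},
}

# Hypothetical per-agent scopes, layered on top of the annotations.
AGENT_SCOPES = {"forecast_agent": {"read"}}

def gate(agent: str, tool: str) -> str:
    ann = TOOLS[tool]
    scopes = AGENT_SCOPES.get(agent, set())
    # Gate 1 (RBAC): read-only tools need "read"; anything else needs "write".
    required = "read" if ann["readOnlyHint"] else "write"
    if required not in scopes:
        return "denied"
    # Gate 2 (annotation): destructive tools require explicit confirmation
    # even for agents that hold the scope.
    return "confirm" if ann["destructiveHint"] else "allowed"

assert gate("forecast_agent", "query_orders") == "allowed"
assert gate("forecast_agent", "delete_orders") == "denied"
```

Either gate can refuse on its own, which is the point: the agent's self-knowledge about an operation never substitutes for the API layer's authorization decision.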
The Three-Layer Model
What March 2026 made clear is that agent-aware database access requires three layers working together:
Layer 1: Identity (Okta, Entra ID) Who is this agent? Who owns it? When was it provisioned? What is its risk classification? This layer answers existence and governance questions.
Layer 2: API (Faucet, API gateways) What can this agent access? Which tables, which columns, which operations? At what rate? With what audit trail? This layer enforces access control at the query level, across any database engine.
Layer 3: Database (Oracle Deep Data Security, PostgreSQL RLS) For environments that need defense in depth, database-level policies provide a final enforcement boundary. If the API layer fails or is bypassed, the database still filters data by identity.
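On the PostgreSQL side of Layer 3, row-level security can express the same idea as the Oracle policy shown earlier. A hedged sketch, assuming the API layer sets a per-session identity variable; the table and setting names are invented, while `CREATE POLICY` and `current_setting` are standard PostgreSQL:

```sql
-- Enable row-level security on the revenue table
ALTER TABLE revenue ENABLE ROW LEVEL SECURITY;

-- Filter rows by the agent identity the API layer sets per session,
-- e.g. via: SET app.current_agent = 'sales_forecast_agent';
CREATE POLICY agent_territory_filter ON revenue
  USING (territory_id IN (
    SELECT territory_id FROM agent_assignments
    WHERE agent_id = current_setting('app.current_agent')
  ));
```

Even if a misconfigured API layer forwards an overbroad query, the database returns only the rows the session's identity is entitled to see.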
Most organizations will start with Layer 2 — it is the fastest to implement and works across all databases. Layer 1 becomes important as you scale past a handful of agents. Layer 3 is for regulated industries where defense in depth is not optional.
What to Do This Week
If you are running AI agents that access databases today — and statistically, you probably are — here are three concrete steps:
1. Inventory your agent credentials. How many agents have database access? What credentials do they use? Are any sharing credentials with production applications? If you cannot answer these questions, Okta’s Agent Discovery (launching April 30) or a manual audit is step one.
2. Put an API layer between your agents and your data. Stop giving agents raw database credentials. A governed API layer with role-based access control eliminates the largest class of agent security incidents.
```shell
# Install Faucet and stand up a governed API in 30 seconds
curl -fsSL https://get.faucet.dev | sh
faucet serve --db postgres://localhost/mydb
```
3. Define per-agent roles with least-privilege access. Each agent gets its own API key, its own role, its own permission set. No shared credentials. No overprivileged service accounts. Column-level filtering for sensitive data.
The vendors are moving. Oracle, Okta, and Microsoft all launched agent identity products in the same month. The protocol infrastructure is ready — MCP at 97 million monthly SDK downloads, A2A adopted by Adobe, SAP, and Microsoft. The gap is the API layer between identity and data.
Getting Started
Faucet turns any database into a secured, agent-ready REST API and MCP server in a single binary. Role-based access control, column-level filtering, rate limiting, and full audit logging — built in, not bolted on.
```shell
# Install Faucet
curl -fsSL https://get.faucet.dev | sh

# Connect to your database and start serving
faucet serve --db postgres://user:pass@localhost/mydb

# Your database is now accessible via REST API and MCP
# with RBAC, rate limiting, and audit logging
```
Works with PostgreSQL, MySQL, SQL Server, Oracle, SQLite, and Snowflake. One binary. No runtime dependencies. No infrastructure to manage.
The agents are already talking to your database. The question is whether your database knows who it’s talking to.