Securing AI Agents Where They Act: From Static Credentials to Smart Controls
By Guy Yemin
TL;DR: AI agents are moving from being “smart assistants” to becoming real actors in enterprise systems: querying databases, updating records, calling APIs. The risks don’t live in the model, but in how agents connect to real infrastructure. In this post, we’ll explore common pitfalls, safer patterns, and show (with examples) what a secure setup looks like in practice.
Why the Execution Layer Matters
When an AI agent is just answering questions with text, the risks are limited to model hallucinations, bias, or leaking sensitive prompts. But once the same agent can query databases, update records, and call APIs, it suddenly behaves like a new type of privileged user. That creates four classes of risk at the execution boundary:
1. Credential exposure: static secrets shared with, or embedded in, agent infrastructure.
2. Over-privilege: one powerful identity reused for every action and every user.
3. Lost attribution: downstream systems cannot tell which human or task triggered a call.
4. Opaque auditing: no record of what was actually executed, or why.
This is the new attack surface of AI adoption. If left unmanaged, it risks turning every AI agent into a potential insider threat.
Anti-Patterns: How Not to Do It
Let’s start by looking at common mistakes. These are approaches we’ve already seen in early implementations:
1. Blind pass-through of user credentials: The agent receives the user’s database credentials and uses them directly.
2. One token to rule them all: The MCP server (or orchestrator) holds a powerful, long-lived admin token that it reuses for every request.
3. Losing the user/agent intent: The downstream system (DB, API) sees only the server’s identity. It has no idea who or what triggered the action.
4. Minimal or opaque logging: The system logs the prompt but not the actual SQL or API call, nor the surrounding context (who, why, when).
Before: Static Credentials and Direct Connections
Consider a typical “before” setup: an MCP server configured with a PostgreSQL connection string embedded directly in its JSON config.
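A sketch of what this often looks like with the reference PostgreSQL MCP server (the host, database, and credentials here are illustrative):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://admin:SuperSecret123@prod-db.internal:5432/customers"
      ]
    }
  }
}
```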
Risks:
1. The admin password sits in plaintext, in the config file and in every backup of it.
2. Every user and every task shares the same over-privileged identity.
3. There is no expiry, no rotation, and no practical way to revoke access quickly.
4. Downstream, every query appears to come from the same account, with no link to the human or task behind it.
This works in a demo, but in production, it creates a nightmare for security and compliance.
Safer Patterns: What to Do Instead
Instead of treating the agent like a user, treat it like a broker of actions that must always operate within security guardrails.
1. Brokered access (terminate and re-issue)
Never forward user or tenant credentials directly. The MCP server should terminate trust and exchange them for new, short-lived, scoped tokens issued by an identity provider or Security Token Service (STS).
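A minimal sketch of the broker step, assuming an OAuth 2.0 token exchange (RFC 8693); the STS endpoint, client credentials, and scope names below are placeholders:

```python
import requests

STS_URL = "https://sts.example.com/oauth2/token"  # placeholder STS endpoint

def broker_token(incoming_user_token: str, capability: str) -> str:
    """Terminate the user's credential and re-issue a short-lived,
    narrowly scoped token for a single capability."""
    resp = requests.post(
        STS_URL,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": incoming_user_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "scope": capability,  # e.g. "records:annotate"
        },
        auth=("mcp-server", "client-secret"),  # the broker's own identity
        timeout=5,
    )
    resp.raise_for_status()
    # Only this short-lived token ever reaches the downstream system;
    # the user's original credential stops here.
    return resp.json()["access_token"]
```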
2. Scoped capabilities
Design tokens and service accounts around capabilities, not tenants or admins.
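To make that concrete, the claims on a brokered token might look something like this; the shape is illustrative, though the act (actor) claim comes from RFC 8693:

```json
{
  "sub": "agent:claude-session-7f3a",
  "act": { "sub": "user:jdoe" },
  "scope": "records:annotate reports:read",
  "aud": "postgres://prod-db.internal/customers",
  "exp": 1735689900
}
```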
3. Context propagation
Carry forward actor identity and task metadata into every request:
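Here’s a minimal sketch using PostgreSQL session settings (the app.actor_id / app.task_id names are a convention, not built-ins), so that row-level security policies and audit triggers can read them:

```python
import psycopg2  # assumes the psycopg2 PostgreSQL driver

def run_with_context(conn, sql, params, *, actor_id, task_id):
    """Execute a statement with actor/task metadata attached to the
    transaction, readable downstream via current_setting('app.actor_id')."""
    with conn, conn.cursor() as cur:
        # is_local=true scopes the settings to this transaction only
        cur.execute("SELECT set_config('app.actor_id', %s, true)", (actor_id,))
        cur.execute("SELECT set_config('app.task_id', %s, true)", (task_id,))
        cur.execute(sql, params)
        # SELECTs return rows; writes return the affected row count
        return cur.fetchall() if cur.description else cur.rowcount
```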
This ensures auditing and downstream policy can be user-aware even though the server executes the call.
4. Rich observability
Log every action at the execution boundary:
1. the exact SQL statement or API call that was executed (not just the prompt),
2. who triggered it (human user, agent identity, session),
3. when it ran and against which system,
4. why it ran: the task or conversation context,
5. the outcome (rows affected, errors, data volume returned).
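For example, a structured audit entry at the boundary might look like this (the field names are illustrative):

```json
{
  "ts": "2025-01-15T10:42:07Z",
  "actor": "user:jdoe",
  "agent": "claude-session-7f3a",
  "action": "UPDATE customers SET note = $1 WHERE id = $2",
  "target": "postgres://prod-db.internal/customers",
  "task": "annotate churn-risk accounts",
  "result": { "rows_affected": 1 }
}
```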
This makes it possible to investigate anomalies, enforce compliance, and meet regulatory obligations.
After: Secure Brokered Setup
Here’s what the same setup looks like after moving to a secure, brokered model. Notice the differences:
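A sketch of the revised config, assuming the MCP server reads its connection string from the environment (some servers take it as an argument instead, in which case a small wrapper script can inject the ephemeral value):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "${POSTGRES_EPHEMERAL_URL}"
      }
    }
  }
}
```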
Instead of hardcoding secrets, the MCP server now reads ephemeral credentials from the environment. Tokens are short-lived and tied to the session context.
Benefits:
- No long-lived secret ever touches the config file or disk.
- Each session gets its own scoped credential, so every action is attributable.
- Revocation is automatic: tokens simply expire.
This enforces least privilege, accountability, and revocation by design.
Smarter Queries with Guardrails
Once the connection is secure, you can safely enable higher-value use cases, such as letting Claude (or another agent) query databases on a user’s behalf. Instead of returning full datasets, the execution layer can apply filters and policies to ensure safe data handling, as in the sketch below:
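A minimal sketch of result-side guardrails; the masked column names and the row cap are illustrative policy choices, not a standard:

```python
MASKED_COLUMNS = {"email", "ssn", "phone"}  # illustrative policy
MAX_ROWS = 100                              # never return full table dumps

def apply_guardrails(columns, rows):
    """Filter query results at the execution boundary, before they
    ever reach the model's context window."""
    masked = {i for i, name in enumerate(columns) if name in MASKED_COLUMNS}
    safe_rows = [
        tuple("***" if i in masked else value for i, value in enumerate(row))
        for row in rows[:MAX_ROWS]
    ]
    return columns, safe_rows
```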
This shows the real power: the agent can be a productivity multiplier and a security enforcer, if designed with the right guardrails.
Practical Checklist for Secure AI Agent Access
Identity & Authentication
- Give every agent and MCP server its own identity; never reuse a shared human account.
- Terminate inbound user credentials at the broker and re-issue short-lived tokens.
Authorization & Policy
- Scope tokens to capabilities (e.g. records:annotate), not to roles like “admin”.
- Enforce policy at the execution boundary, not only in the prompt.
Credential Hygiene
- Keep secrets out of config files and source control; load ephemeral credentials at runtime.
- Prefer automatic expiry over manual rotation schedules.
Observability & Audit
- Log the actual SQL or API call, the actor, the timestamp, and the task context.
- Make logs queryable so anomalies and incidents can be investigated quickly.
Defense-in-Depth
- Filter results (row limits, column masking) even for authorized queries.
- Assume prompt injection will happen; controls must hold even when the model misbehaves.
Example Walkthrough: Annotating a Record Safely
Let’s walk through a realistic flow:
1. A support engineer asks the agent to add a note to a customer record.
2. The MCP server verifies the engineer’s identity and exchanges it for a short-lived token scoped to records:annotate only.
3. The server executes the UPDATE with the actor and task context attached to the transaction.
4. The execution boundary logs the exact statement, the actor, the timestamp, and the task that triggered it.
5. Minutes later the token expires; nothing persistent is left behind.
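Tying the earlier sketches together, the whole flow might look like this in code; verify_and_extract_actor stands in for real JWT validation (e.g. with a library like PyJWT) and is hypothetical:

```python
def annotate_record(conn, user_token, record_id, note, task_id):
    """End-to-end: broker a scoped token, attach context, execute, audit."""
    token = broker_token(user_token, capability="records:annotate")
    actor_id = verify_and_extract_actor(token)  # hypothetical JWT validation
    return run_with_context(
        conn,
        "UPDATE customers SET note = %s WHERE id = %s",
        (note, record_id),
        actor_id=actor_id,
        task_id=task_id,
    )
```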
If anything goes wrong, you know exactly who triggered the action, what was done, and why.
The Bigger Picture
Why does this matter? Because AI adoption is accelerating, and agents are becoming gateways to sensitive systems. If we repeat old mistakes (static passwords, overscoped accounts, opaque logging), we’ll create a new wave of breaches.
But if we adopt modern identity-first practices (brokered access, short-lived credentials, contextual logging), we can make AI agents as safe as, or safer than, humans when accessing systems.
The real challenge is not building smarter prompts. It’s building smarter execution controls.
Key Takeaways
- The risk isn’t in the model; it’s at the execution layer, where agents touch real systems.
- Treat agents as brokers of actions, not as users with passwords.
- Replace static, overscoped credentials with brokered, short-lived, capability-scoped tokens.
- Propagate actor and task context so every downstream action is attributable.
- Log at the execution boundary: what ran, who triggered it, when, and why.
With these controls, you don’t have to choose between productivity and security: you get both.