Securing AI Agents Where They Act: From Static Credentials to Smart Controls


By Guy Yemin

TL;DR: AI agents are moving from being “smart assistants” to becoming real actors in enterprise systems: querying databases, updating records, and calling APIs. The risks don’t live in the model itself but in how agents connect to real infrastructure. In this post, we’ll explore common pitfalls and safer patterns, and show (with examples) what a secure setup looks like in practice.


Why the Execution Layer Matters

When an AI agent is just answering questions with text, the risks are limited to model hallucinations, bias, or leaking sensitive prompts. But once the same agent can:

  • query a database
  • update a record
  • pull data from a SaaS API
  • or even trigger automation in cloud infrastructure

it suddenly behaves like a new type of privileged user. That creates four classes of risk at the execution boundary:

  1. Credential exposure - static passwords or API keys embedded in config files or prompts.
  2. Overscoped privileges - one “admin” token reused for every agent action.
  3. No accountability - systems can’t distinguish among user, agent, and broker identities.
  4. Weak monitoring - logs capture prompts, but not the actual database queries or API calls.

This is the new attack surface of AI adoption. If left unmanaged, it risks turning every AI agent into a potential insider threat.


Anti-Patterns: How Not to Do It

Let’s start by looking at common mistakes. These are approaches we’ve already seen in early implementations:

1. Blind pass-through of user credentials: The agent receives the user’s database credentials and uses them directly.

  • Why it fails: If the agent is compromised (e.g., via prompt injection), those credentials are exposed. The database sees only the user’s identity, not the fact that an agent acted. There’s no way to scope or revoke access per task.

2. One token to rule them all: The MCP server (or orchestrator) holds a powerful, long-lived admin token that it reuses for every request.

  • Why it fails: Any bug or misprompt turns into a full system compromise. It’s like running every query as root.

3. Losing the user/agent intent: The downstream system (DB, API) sees only the server’s identity. It has no idea who or what triggered the action.

  • Why it fails: You lose the ability to enforce per-user policy (row-level filtering, tenant scoping). Auditing becomes meaningless because every action looks the same.

4. Minimal or opaque logging: The system logs the prompt but not the actual SQL or API call, nor the context (who, why, when).

  • Why it fails: When something goes wrong, you can’t answer the basic forensic question: “who did what?”


Before: Static Credentials and Direct Connections

This is a real-world example of a "before" setup. An MCP server is configured with a PostgreSQL connection string embedded directly in the JSON config.
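For illustration, here’s roughly what such a config looks like, in the style of the reference Postgres MCP server (the host, database, and credentials below are made up):

  {
    "mcpServers": {
      "postgres": {
        "command": "npx",
        "args": [
          "-y",
          "@modelcontextprotocol/server-postgres",
          "postgresql://admin:SuperSecret123@db.internal:5432/sales"
        ]
      }
    }
  }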

Risks:

  • Credentials exposed in plaintext.
  • No MFA or time-limited access.
  • Hard to rotate credentials securely.
  • Direct AI → DB connection without centralized monitoring and control.
  • Cross-MCP confusion - if multiple MCP servers are configured alongside this one, an attacker could pivot through another server to make the Postgres agent issue unintended queries against the database.


This works in a demo, but in production, it creates a nightmare for security and compliance.


Safer Patterns: What to Do Instead

Instead of treating the agent like a user, treat it like a broker of actions that must always operate within security guardrails.

1. Brokered access (terminate and re-issue)

Never forward user or tenant credentials directly. The MCP server should terminate trust and exchange them for new, short-lived, scoped tokens issued by an identity provider or Security Token Service (STS).

  • Each token is purpose-built: correct audience, narrow scopes, strict TTL (minutes, not days).
  • If leaked, blast radius is limited.
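Here’s a minimal sketch of that exchange in Python, assuming an STS that implements OAuth 2.0 token exchange (RFC 8693); the endpoint, audience, and capability names are made up:

  # Exchange the user's session credential for a short-lived, scoped token.
  # The MCP server never forwards user_session_token downstream.
  import requests

  STS_TOKEN_URL = "https://sts.example.com/oauth2/token"  # hypothetical STS

  def broker_token(user_session_token: str, capability: str) -> str:
      resp = requests.post(
          STS_TOKEN_URL,
          data={
              "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
              "subject_token": user_session_token,
              "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
              "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
              "audience": "postgres-mcp",  # token valid only for this resource
              "scope": capability,         # e.g. "db.read.invoices"
          },
          timeout=5,
      )
      resp.raise_for_status()
      return resp.json()["access_token"]  # expires in minutes, not days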

2. Scoped capabilities

Design tokens and service accounts around capabilities, not tenants or admins.

  • Example: db.read.invoices or db.write.annotations
  • Default to read-only; escalate to write only with extra controls.
  • Split identities by function (analytics vs operations).
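One way to make this concrete is a small capability map inside the broker that defaults to deny (all names are illustrative):

  # Map each narrow capability to the minimum database permission it grants.
  CAPABILITIES = {
      "db.read.invoices":     {"tables": {"invoices"},    "operations": {"SELECT"}},
      "db.write.annotations": {"tables": {"annotations"}, "operations": {"INSERT", "UPDATE"}},
  }

  def is_allowed(capability: str, table: str, operation: str) -> bool:
      # Default deny: anything not explicitly granted is rejected.
      grant = CAPABILITIES.get(capability)
      return grant is not None and table in grant["tables"] and operation in grant["operations"]

  assert is_allowed("db.read.invoices", "invoices", "SELECT")
  assert not is_allowed("db.read.invoices", "invoices", "DELETE")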

3. Context propagation

Carry forward actor identity and task metadata into every request:

  • Who initiated the action (user ID)?
  • Which agent was involved (agent ID)?
  • What was the task (task ID, purpose)?

This ensures auditing and downstream policy can be user-aware even though the server executes the call.
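As a sketch, the broker can attach that context to every downstream call; the header names here are illustrative (in practice this context often travels as signed JWT claims):

  # Attach actor, agent, and task metadata to a downstream API call.
  import uuid
  import requests

  def call_downstream(url: str, payload: dict, token: str,
                      user_id: str, agent_id: str, purpose: str) -> requests.Response:
      return requests.post(
          url,
          json=payload,
          headers={
              "Authorization": f"Bearer {token}",
              "X-Actor-Id": user_id,            # who initiated the action
              "X-Agent-Id": agent_id,           # which agent was involved
              "X-Task-Id": str(uuid.uuid4()),   # correlates logs across systems
              "X-Task-Purpose": purpose,        # what the task was
          },
          timeout=5,
      )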

4. Rich observability

Log every action at the execution boundary:

  • Who: user, agent, broker
  • What: operation performed, query or API endpoint
  • Why: task description or intent
  • When: timestamp
  • Result: success/failure, output hash

This makes it possible to investigate anomalies, enforce compliance, and meet regulatory obligations.
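A minimal sketch of such an audit record (field names are illustrative; note the output is hashed rather than logged raw):

  # Write one structured audit record per action at the execution boundary.
  import hashlib, json, logging
  from datetime import datetime, timezone

  audit = logging.getLogger("audit")

  def log_action(user_id, agent_id, broker_id, operation, purpose, result, output: bytes):
      audit.info(json.dumps({
          "who": {"user": user_id, "agent": agent_id, "broker": broker_id},
          "what": operation,   # e.g. the SQL text or API endpoint
          "why": purpose,      # task description or intent
          "when": datetime.now(timezone.utc).isoformat(),
          "result": result,    # "success" or "failure"
          "output_sha256": hashlib.sha256(output).hexdigest(),
      }))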


After: Secure Brokered Setup

Here’s what the same setup looks like after moving to a secure, brokered model. Notice the differences:

Benefits:

  • No static passwords in config files.
  • The MFA token is valid only for a limited period and is tied to a specific IP address.
  • All access is fully audited and sessions are recorded.
  • Supports Zero Standing Privileges (ZSP).

Instead of hardcoding secrets, the MCP server now uses environment variables pointing to ephemeral credentials. Tokens are short-lived and tied to the session context.
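A sketch of that config follows; the package name and variable names are illustrative, and how the ephemeral token is injected and refreshed depends on your MCP host and secrets broker:

  {
    "mcpServers": {
      "postgres": {
        "command": "npx",
        "args": ["-y", "postgres-mcp-server"],
        "env": {
          "DB_HOST": "db.internal",
          "DB_USER": "svc-analytics-readonly",
          "DB_TOKEN": "${EPHEMERAL_DB_TOKEN}"
        }
      }
    }
  }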


This enforces least privilege, accountability, and revocation by design.


Smarter Queries with Guardrails

Once the connection is secure, you can safely enable higher-value use cases, such as using Claude (or another agent) to query databases.

Instead of returning full datasets, the agent can apply filters and policies to ensure safe data handling:

  • Restricting results to a specific tenant, user group, or department.
  • Masking sensitive columns such as PII or financial data.
  • Limiting row counts or query complexity.
  • Detecting anomalies, like an unexpected spike in returned data.
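As a rough sketch, a guardrail layer at the broker might look like this (the tenant column, masked fields, and row limit are illustrative):

  # Apply tenant scoping, masking, and volume limits before results leave the broker.
  MASKED_COLUMNS = {"ssn", "card_number", "salary"}
  MAX_ROWS = 500

  def apply_guardrails(rows: list[dict], tenant_id: str) -> list[dict]:
      # Tenant scoping: never return another tenant's rows.
      scoped = [r for r in rows if r.get("tenant_id") == tenant_id]
      # Crude anomaly control: fail loudly on unexpected result volume.
      if len(scoped) > MAX_ROWS:
          raise RuntimeError(f"{len(scoped)} rows returned; limit is {MAX_ROWS}")
      # Mask sensitive columns such as PII or financial data.
      return [{k: ("***" if k in MASKED_COLUMNS else v) for k, v in r.items()}
              for r in scoped]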

This shows the real power: the agent can be both a productivity multiplier and a security enforcer, if designed with the right guardrails.


Practical Checklist for Secure AI Agent Access

Identity & Authentication

  • Treat MCP servers as OAuth/OIDC clients to downstream systems.
  • Use token exchange or on-behalf-of (OBO) flows instead of pass-through.
  • Restrict tokens by audience (specific API) and scope (specific action).

Authorization & Policy

  • Enforce RBAC/ABAC at the broker and downstream system.
  • Default to deny; allow only narrow capabilities.
  • Separate read vs write vs admin roles.

Credential Hygiene

  • Eliminate static secrets from configs.
  • Use short-lived tokens (minutes).
  • Rotate automatically; revoke aggressively.

Observability & Audit

  • Log actor, task, action, and result for every call.
  • Hash inputs/outputs if sensitive.
  • Alert on anomalies (e.g., excessive queries, privilege escalation).

Defense-in-Depth

  • Assume prompt injection; validate and sanitize inputs.
  • Segregate environments (dev, staging, prod).
  • Apply monitoring not just at the agent, but at the broker and resource.


Example Walkthrough: Annotating a Record Safely

Let’s walk through a realistic flow:

  1. User request: “Agent, annotate record ABC123 with status = reviewed.”
  2. Agent calls broker: The request goes to the MCP server exposing the annotate_record capability.
  3. Broker validation: The server checks: is this user allowed? Is the agent trusted?
  4. Token issuance: The broker exchanges session credentials for a 2-minute, write-only token scoped to annotate_record.
  5. Database update: The DB sees a call from the broker with context: actor=user123, agent=assistant-v2, task=annotate_record, record=ABC123.
  6. Audit trail: Logs capture the action, and the token is revoked once its TTL expires.

If anything goes wrong, you know exactly who triggered the action, what was done, and why.
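Stitched together, the flow looks something like this minimal sketch; the policy check and token issuance are stubbed stand-ins, not a real API:

  import time

  def policy_allows(user_id, agent_id, capability):
      # Stand-in for the broker's RBAC/ABAC check.
      return (user_id, agent_id, capability) == ("user123", "assistant-v2", "annotate_record")

  def issue_token(capability, ttl_seconds):
      # Stand-in for the STS call shown earlier; short-lived and narrowly scoped.
      return {"capability": capability, "expires_at": time.time() + ttl_seconds}

  def annotate_record(user_id, agent_id, record_id, status):
      # Broker validation: is this user allowed, and is this agent trusted?
      if not policy_allows(user_id, agent_id, "annotate_record"):
          raise PermissionError("annotate_record denied by policy")
      # Token issuance: a 2-minute, write-only token scoped to the capability.
      token = issue_token("db.write.annotations", ttl_seconds=120)
      # Database update, with full context attached for downstream policy and audit.
      context = {"actor": user_id, "agent": agent_id,
                 "task": "annotate_record", "record": record_id}
      print("UPDATE records SET status=%s WHERE id=%s", (status, record_id), context)
      return {"result": "success", "token_expires_at": token["expires_at"]}

  annotate_record("user123", "assistant-v2", "ABC123", "reviewed")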


The Bigger Picture

Why does this matter? Because AI adoption is accelerating, and agents are becoming gateways to sensitive systems. If we repeat old mistakes (static passwords, overscoped accounts, opaque logging), we’ll create a new wave of breaches.

But if we adopt modern identity-first practices, such as brokered access, short-lived credentials, and contextual logging, we can make AI agents as safe as (or safer than) humans when accessing systems.

The real challenge is not building smarter prompts. It’s building smarter execution controls.


Key Takeaways

  • AI agents change the game by acting, not just reasoning.
  • The biggest risks are at the execution layer: credentials, privileges, and accountability.
  • Avoid anti-patterns like pass-through tokens and static secrets.
  • Adopt brokered, scoped, short-lived access with context propagation.
  • Use observability and guardrails to make agent actions transparent and auditable.

With these controls, you don’t have to choose between productivity and security: you get both.
