---
id: "concept-least-privilege-agents"
type: "concept"
source_timestamps: ["00:15:19", "00:15:25"]
tags: ["security", "governance", "enterprise-architecture"]
related: ["action-use-service-accounts", "claim-governance-drives-adoption"]
definition: "The security practice of scoping an AI agent's system access to the absolute minimum permissions required to execute its specific workflow."
sources: ["s06-openai-free-employee"]
sourceVaultSlug: "s06-openai-free-employee"
originDay: 6
---
# Least Privilege Agents

## Definition

The security practice of scoping an AI agent's system access to the absolute minimum permissions required to execute its specific workflow.

## The Anti-Pattern

The speaker warns against the common, risky practice of publishing an agent using the **personal, authenticated app connections of its creator** (e.g., a senior executive's Salesforce credentials). If an agent is deployed this way, any user interacting with it effectively inherits those elevated permissions, creating a massive security vulnerability and expanding the **'blast radius'** of potential errors or malicious prompts.
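The inherited-permission problem can be sketched in a few lines. Everything here (`EXEC_TOKEN`, `run_agent`) is an illustrative assumption, not any real platform's API; it only shows that an agent published with its creator's token acts with that token no matter who calls it:

```python
# Illustrative sketch: an agent published with its creator's personal
# token executes every request with the CREATOR's permissions,
# regardless of which user is actually asking.

EXEC_TOKEN = {"user": "exec@corp", "scopes": {"read_all", "write_all", "delete"}}

def run_agent(user_request: str, token: dict) -> str:
    # The agent never checks who the caller is; it simply acts
    # with whatever token it was deployed with.
    return f"{user_request!r} executed with scopes {sorted(token['scopes'])}"

# Any user's prompt now runs with the executive's full access:
print(run_agent("delete all Q3 opportunities", EXEC_TOKEN))
```

This is the 'blast radius' in miniature: the damage a bad prompt can do is bounded by the deployed token's scopes, not by the caller's.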

## The Correct Posture

Organizations must adopt a least privilege model:

- **Provision dedicated service accounts** specifically for the agent (see [[action-use-service-accounts]])
- **Scope access** to the absolute minimum required (e.g., read-only access to a specific folder, append-only access to a single database table)
- **Limit the audience** to only the users who need the agent
- **Avoid high-impact connectors** (e.g., anything with write or delete access to production systems) until they are thoroughly tested
- **Audit configurations regularly**
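The scoping rules above amount to a deny-by-default grant model. A minimal sketch, assuming nothing beyond the note itself; the names (`AgentGrant`, `is_allowed`) are hypothetical, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentGrant:
    resource: str        # e.g. a folder or table the agent may touch
    actions: frozenset   # e.g. {"read"} or {"append"}

def is_allowed(grants, resource: str, action: str) -> bool:
    """Deny by default; allow only on an explicit matching grant."""
    return any(g.resource == resource and action in g.actions for g in grants)

# A dedicated service account provisioned with exactly two grants:
# read-only on one folder, append-only on one table -- nothing else.
grants = [
    AgentGrant("reports/q3", frozenset({"read"})),
    AgentGrant("db/audit_log", frozenset({"append"})),
]

print(is_allowed(grants, "reports/q3", "read"))    # True: explicitly granted
print(is_allowed(grants, "reports/q3", "write"))   # False: action not granted
print(is_allowed(grants, "db/customers", "read"))  # False: deny by default
```

The design choice worth noting is the default: anything not explicitly granted is denied, which is the inverse of publishing an agent on a creator's broad personal credentials.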

## Connection to Adoption

This is not optional bureaucracy — it is the precondition of enterprise viability. See [[claim-governance-drives-adoption]] and [[quote-permission-model]]. The required baseline knowledge is captured in [[prereq-enterprise-governance]].

## Enrichment Notes

This practice is strongly supported by external enterprise AI security guidance. A counter-perspective worth noting: some practitioners argue that heavy least-privilege provisioning slows pilots, and prefer 'trust but verify' (human review of all outputs) early in adoption, but this trades audit risk for speed.
## Related across days
- [[claim-shadow-ai-usage]]
- [[concept-shadow-agents]]
