---
id: "concept-active-level-3"
type: "concept"
source_timestamps: ["01:35:00"]
tags: ["automation", "ai-agents"]
related: ["concept-levels-of-ai-fluency", "concept-playbooking-method", "entity-fathom", "framework-playbook-outline"]
definition: "The distinction between AI that merely observes and summarizes (passive) versus AI that autonomously executes follow-up actions based on those observations (active)."
speakers: ["Rachel Woods"]
---
# Active vs. Passive Level 3 AI

## The distinction

Within the highest tier of AI fluency — see [[concept-levels-of-ai-fluency]] — there is a critical sub-distinction.

### Passive Level 3

AI runs in the background but generates only **static outputs**. The human must still act on them.

*Canonical example:* an AI meeting notetaker such as [[entity-fathom]] or Otter.ai. It automatically joins a call and generates a transcript and summary. Useful, but you still have to read it, decide, and act.

### Active Level 3

AI takes the output from a passive system and **automatically executes the next steps**.

*Canonical example:* taking a meeting transcript, identifying action items that deviate from the company roadmap, automatically drafting follow-up emails, and updating the project management software. This is where compounding leverage lives.
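The handoff described above, passive output in, executed next steps out, can be sketched as a small pipeline. This is purely illustrative: the function names, the `TODO:` convention, the roadmap topics, and the keyword heuristic (a stand-in for an LLM call) are all assumptions, not anything from the source.

```python
# Hypothetical sketch of an Active Level 3 pipeline: a passive output
# (a meeting transcript) becomes drafted follow-up actions.
# All names, data shapes, and the naive keyword heuristic are illustrative.

ROADMAP_TOPICS = {"onboarding", "billing", "mobile app"}  # assumed roadmap

def extract_action_items(transcript: str) -> list[str]:
    """Naive stand-in for an LLM extraction step: lines starting 'TODO:'."""
    return [line.removeprefix("TODO:").strip()
            for line in transcript.splitlines()
            if line.startswith("TODO:")]

def deviates_from_roadmap(item: str) -> bool:
    """Flag items that mention none of the roadmap topics."""
    return not any(topic in item.lower() for topic in ROADMAP_TOPICS)

def draft_follow_up(item: str) -> str:
    """The 'active' step: produce an artifact instead of waiting on a human."""
    return f"Subject: Follow-up\n\nFlagging off-roadmap item: {item}"

def run_pipeline(transcript: str) -> list[str]:
    """Passive input in, drafted follow-ups out."""
    return [draft_follow_up(item)
            for item in extract_action_items(transcript)
            if deviates_from_roadmap(item)]

transcript = (
    "Alice: let's ship billing v2.\n"
    "TODO: improve billing error messages\n"
    "TODO: explore a desktop client\n"
)
for email in run_pipeline(transcript):
    print(email)
```

The shape is the point, not the heuristics: a passive system stops after `extract_action_items`; an active one keeps going through `draft_follow_up` without a human in between.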

## What Active Level 3 requires

Active Level 3 is impossible without explicit playbooks — see [[concept-playbooking-method]] and the [[framework-playbook-outline]]. The playbook is what tells the AI *what to do with* the passive output.
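One way to make "the playbook tells the AI what to do" concrete is to treat the playbook as machine-readable steps the agent consults before acting. A minimal sketch, assuming a hypothetical `trigger`/`action`/`requires_review` schema that is not from the source:

```python
# A playbook rendered as explicit, machine-readable steps.
# The schema and step names here are hypothetical illustrations.
PLAYBOOK = [
    {"trigger": "meeting_ended",     "action": "summarize_transcript",   "requires_review": False},
    {"trigger": "action_item_found", "action": "draft_follow_up_email",  "requires_review": True},
    {"trigger": "action_item_found", "action": "update_project_tracker", "requires_review": True},
]

def next_actions(trigger: str) -> list[dict]:
    """Return the playbook steps an agent would execute for a trigger."""
    return [step for step in PLAYBOOK if step["trigger"] == trigger]

for step in next_actions("action_item_found"):
    suffix = " (needs human review)" if step["requires_review"] else ""
    print(step["action"] + suffix)
```

Without entries like these, the agent has a passive output but no mapping from observation to action, which is the gap the playbooking method fills.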

## Caveat from enrichment

Independent analyses (e.g., Stanford HAI) caution that Active Level 3 systems can suffer from *agent drift* — hallucinations compounding inside autonomous loops — and that passive systems are currently more reliable. Use Active Level 3 with monitoring and review checkpoints.
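The review checkpoints recommended above can be enforced mechanically rather than by convention, for example by routing every agent-proposed action through an approval queue so nothing executes unattended. A hypothetical sketch (the class and action strings are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds agent-proposed actions until a human approves them,
    limiting how far compounding errors (agent drift) can propagate."""
    pending: list[str] = field(default_factory=list)
    executed: list[str] = field(default_factory=list)

    def propose(self, action: str) -> None:
        self.pending.append(action)  # nothing runs automatically

    def approve(self, action: str) -> None:
        self.pending.remove(action)
        self.executed.append(action)  # only human-approved actions run

queue = ReviewQueue()
queue.propose("send follow-up email to client")
queue.propose("update project tracker")
queue.approve("send follow-up email to client")
print(queue.executed)  # only the approved action has run
```

This keeps the leverage of Active Level 3 while preserving the reliability advantage the passive systems currently have.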
