---
id: "framework-fundamental-loop"
type: "framework"
source_timestamps: ["00:23:31", "00:23:40"]
tags: ["agent-design", "human-in-the-loop"]
related: ["concept-cross-category-reasoning", "concept-agentic-memory", "concept-human-door"]
sources: ["s21-ai-tool-memory"]
sourceVaultSlug: "s21-ai-tool-memory"
originDay: 21
---
# The Fundamental Agent Loop

## Purpose
A conceptual loop describing the **ideal division of labor** between an autonomous agent and a human user within the [[concept-open-brain-d21]] architecture. It emphasizes the agent's role in pattern recognition and the human's role in judgment.

## The Three Phases
1. **Agent Surfaces** — the agent autonomously monitors data, recognizes patterns, and flags insights or conflicts (e.g., an expiring warm intro). This is enabled by [[concept-cross-category-reasoning]] and [[concept-agentic-memory]].
2. **Human Decides** — the human reviews surfaced information via the [[concept-human-door]] visual dashboard and applies *judgment* to make a decision.
3. **Agent Executes** — once the human decides, the agent carries out the resulting tasks or updates the [[concept-shared-surface]] accordingly.
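The three phases above can be sketched as a single pass over a shared data store. This is a minimal illustration, not an implementation from the source: all names (`Insight`, `SharedSurface`, `fundamental_loop`, the callbacks) are hypothetical, and the human's dashboard decision is modeled as a simple callback.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Insight:
    """Something the agent has surfaced for human review (hypothetical type)."""
    summary: str
    approved: bool = False

@dataclass
class SharedSurface:
    """One store both agent and human read and write, so there is no sync layer."""
    insights: List[Insight] = field(default_factory=list)
    executed: List[str] = field(default_factory=list)

def fundamental_loop(surface: SharedSurface,
                     surface_fn: Callable[[], List[Insight]],
                     decide_fn: Callable[[Insight], bool]) -> None:
    # Phase 1: the agent scans and surfaces patterns or conflicts.
    surface.insights = surface_fn()
    for insight in surface.insights:
        # Phase 2: the human applies judgment (a callback stands in for
        # the visual dashboard here).
        insight.approved = decide_fn(insight)
        # Phase 3: the agent executes only what the human approved.
        if insight.approved:
            surface.executed.append(insight.summary)

surface = SharedSurface()
fundamental_loop(
    surface,
    surface_fn=lambda: [Insight("Warm intro to Dana expires Friday"),
                        Insight("Duplicate contact entry detected")],
    decide_fn=lambda i: "intro" in i.summary,  # human approves only the intro
)
print(surface.executed)
```

Note that the agent never acts without a decision from `decide_fn`: judgment stays with the human, while scanning and execution stay with the agent.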

## Why This Loop
- Agents are good at **scanning and recall** — humans are good at **judgment and prioritization**.
- Visual dashboards make the human's decision step fast — sidestepping the [[concept-infinite-scroll-problem]].
- The shared surface ensures that what the agent surfaces and what the human sees are the same data, with no sync lag — see [[claim-no-sync-layer]].
