---
id: "prereq-agentic-workflows-d44"
type: "prereq"
source_timestamps: ["00:25:05"]
tags: ["automation", "system-design"]
related: ["claim-human-handoffs-bottleneck", "concept-single-eval-gate"]
reason: "Required to understand the bottleneck of human-in-the-loop handoffs."
sources: ["s44-claude-mythos"]
sourceVaultSlug: "s44-claude-mythos"
originDay: 44
---
# Agentic Workflows

## Why this is a prerequisite

The video assumes working knowledge of AI agents — systems where an LLM is given a goal, access to tools, and the ability to execute autonomously.

Without this context, the speaker's points about multi-agent coordination, orchestrators, and removing intermediate evaluation gates ([[concept-single-eval-gate]], [[claim-human-handoffs-bottleneck]]) lack grounding.

## What you should already know

**Core concepts:**
- **Agent loop:** observe → think → act → observe (ReAct, function calling)
- **Tool use:** how LLMs invoke external APIs / functions
- **Memory:** short-term (the context window) vs. long-term (e.g., a vector store)
- **Orchestration:** multi-agent systems, planner/executor splits, handoffs
- **Failure modes:** tool errors, hallucinated arguments, infinite loops, error propagation
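
The agent loop and tool-use pattern above can be sketched in a few lines. This is a toy illustration, not any library's API: `stub_llm`, `TOOLS`, and `run_agent` are invented names, and the rule-based `stub_llm` stands in for what would be a real model call in the "think" step. Note how the step cap guards against the infinite-loop failure mode listed above.

```python
# Minimal sketch of the observe -> think -> act loop with tool use.
# All names here are illustrative; a real agent would call an LLM API
# in the "think" step and register real tools.
from typing import Callable

# Tool registry: name -> callable. The agent can only act through these
# (this mirrors the function-calling pattern).
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for '{q}'",
    "finish": lambda answer: answer,
}

def stub_llm(goal: str, observations: list[str]) -> tuple[str, str]:
    """Stand-in policy: pick the next (tool, argument) from history."""
    if not observations:                     # nothing gathered yet -> search
        return ("search", goal)
    return ("finish", observations[-1])      # enough info -> emit an answer

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []             # short-term memory
    for _ in range(max_steps):               # cap steps: guards infinite loops
        tool, arg = stub_llm(goal, observations)  # think
        result = TOOLS[tool](arg)                 # act
        if tool == "finish":
            return result
        observations.append(result)               # observe
    return "step budget exhausted"           # failure mode: loop never converged

print(run_agent("agentic workflows"))
# -> results for 'agentic workflows'
```

A planner/executor split is the same loop with two policies: one proposes a plan, the other executes steps through the tool registry.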

## Suggested background

- LangChain / LangGraph documentation
- Reflexion (Shinn et al., 2023) — self-correction in agents
- SWE-agent papers — agents on software engineering tasks
- Cognition Labs' Devin demos — end-to-end autonomous coding
- Tools like [[entity-product-cursor-d44|Cursor]] and [[entity-product-factory-ai|Factory.ai]]

## Why the speaker's claim is sharper with this background

If you've watched an agent sit idle because a human reviewer was AFK for 4 hours, you already understand [[quote-human-bottleneck]]. The contrarian argument [[contrarian-intermediate-testing-degrades]] then becomes legible as an empirical claim, not a vague provocation.
