---
id: "concept-shift-in-callers"
type: "concept"
source_timestamps: ["01:41:00", "01:52:00"]
tags: ["ai-agents", "automation"]
related: ["concept-orchestrator-pattern", "claim-agents-primary-callers", "quote-math-doesnt-math", "prereq-agent-tool-calling"]
definition: "The transition from humans manually invoking LLM skills to autonomous agents calling hundreds of skills programmatically during a single run."
sources: ["s43-file-format-agreement"]
sourceVaultSlug: "s43-file-format-agreement"
originDay: 43
---
# The Shift in Skill Callers (Human to Agent)

## Definition

The transition from humans manually invoking LLM skills to autonomous agents calling hundreds of skills programmatically during a single run.

## The Old World vs. The New World

When skills were first introduced (notably in [[entity-product-claude-d43]]), they were primarily invoked by humans typing a command or clicking a button in a chat interface. A human might call a few skills per conversation.

Today the architecture has fundamentally changed: the primary callers of skills are no longer humans, but agents. An autonomous agent can make hundreds of skill calls over the course of a single execution run, dynamically selecting the right tools for the task.
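The loop described above can be sketched minimally. This is an illustration, not any real framework's API: the skill registry, skill names, and the fixed plan are all hypothetical, standing in for the plan a real agent would generate dynamically.

```python
from typing import Callable

# Hypothetical skill registry: each skill is a plain string -> string function.
SKILLS: dict[str, Callable[[str], str]] = {
    "uppercase": str.upper,
    "reverse": lambda s: s[::-1],
    "truncate": lambda s: s[:5],
}

def run_agent(plan: list[tuple[str, str]]) -> list[str]:
    """Execute each (skill_name, argument) step with no human in the loop."""
    results = []
    for skill_name, arg in plan:
        skill = SKILLS[skill_name]  # the agent routes by name alone
        results.append(skill(arg))
    return results

# Hundreds of calls in a single run is trivial for an agent:
plan = [("uppercase", "skill"), ("reverse", "agent")] * 100
outputs = run_agent(plan)
# len(outputs) == 200; outputs[0] == "SKILL"
```

The point of the sketch is the routing step: the agent selects each skill programmatically, with no opportunity for a human to intervene between calls.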

## Why This Matters for Skill Design

As the speaker states in [[quote-math-doesnt-math]]: *"The math just doesn't math for humans."* At hundreds of calls per run, no human could invoke, monitor, or correct each step manually.

This shift necessitates a complete redesign of how skills are written. They can no longer rely on:

- human intuition
- mid-process correction
- ambient context the user happens to remember

Instead, skills must be explicitly designed to be **agent-readable**, with strict contracts (see [[concept-skills-as-contracts]]), clear routing signals (see [[concept-description-routing-signal]]), and comprehensive edge-case documentation.
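One way to picture an agent-readable skill is as a declarative spec whose description doubles as the routing signal. This is a hypothetical shape, assuming a simple dataclass; the `csv_to_json` skill, its schema, and its edge-case notes are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SkillSpec:
    name: str
    # The description is the routing signal: often the only information
    # an agent has when deciding whether this skill fits the task.
    description: str
    input_schema: dict          # strict contract for the input payload
    edge_cases: list[str] = field(default_factory=list)

convert = SkillSpec(
    name="csv_to_json",
    description=(
        "Convert a CSV string into a JSON array of row objects. "
        "Use only for well-formed, comma-delimited CSV with a header row."
    ),
    input_schema={
        "type": "object",
        "properties": {"csv": {"type": "string"}},
        "required": ["csv"],
    },
    edge_cases=[
        "empty input returns an empty array",
        "quoted fields may contain commas",
    ],
)
```

Note how the description states both what the skill does and when *not* to use it; an agent that reads hundreds of such specs per run has no ambient context to fall back on, so everything it needs must be in the contract itself.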

## Related

- [[claim-agents-primary-callers]] — the empirical claim
- [[concept-orchestrator-pattern]] — the architectural successor pattern
- [[claim-agents-lack-recovery]] — why agent-first design demands more rigor
