---
id: "prereq-agent-context-windows"
type: "prereq"
source_timestamps: ["00:12:50"]
tags: ["ai-literacy"]
related: ["concept-ambient-agent-memory", "entity-chronicle"]
reason: "Explains the technical necessity behind ambient memory solutions like Chronicle."
sources: ["s03-apps-no-api"]
sourceVaultSlug: "s03-apps-no-api"
originDay: 3
---
# Familiarity with LLM Context Windows

## Why You Need This

Large Language Models have a **finite context window** — a maximum number of tokens they can read at once. Every screenshot, every log line, every prior message competes for that fixed budget.
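The budget competition can be sketched concretely. Below is a minimal, hypothetical example of fitting a message history into a fixed token budget; the word-count-based token estimate is a rough stand-in for a real tokenizer:

```python
# Hypothetical sketch: trimming a message history to a fixed token budget.
# Token counts are approximated as words * 1.3; real tokenizers differ.

def approx_tokens(text: str) -> int:
    return int(len(text.split()) * 1.3)

def trim_to_budget(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose combined estimate fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk newest-to-oldest
        cost = approx_tokens(msg)
        if used + cost > budget:
            break                        # older messages fall off the edge
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order
```

Everything older than what fits is simply gone, which is exactly why the memory problem below exists.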

## Implications for Agents

- Long-running agent sessions will **exceed the window** quickly if they try to remember everything raw.
- Naive solutions (e.g. dumping the entire screen-capture log every turn) are token-hungry and expensive.
- Practical agents need a **summarization/memory layer** between raw observations and the LLM input.

## How This Maps to the Video

[[concept-ambient-agent-memory]] — and specifically [[entity-chronicle]] — exists precisely because of this constraint. Screenshots are processed server-side, then **distilled into local Markdown files** that fit the agent's context budget when needed.
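The "distill into local Markdown" step might look something like this sketch. The file naming and layout are invented for illustration and are not taken from Chronicle:

```python
# Hypothetical distillation step: reduce a batch of observations to a dated
# Markdown note small enough to load into a context window on demand.

from datetime import date
from pathlib import Path

def distill_to_markdown(observations: list[str], out_dir: Path) -> Path:
    """Write a compact Markdown digest of today's observations."""
    today = date.today().isoformat()
    lines = [f"# Activity digest: {today}", ""]
    lines += [f"- {obs}" for obs in observations]
    path = out_dir / f"{today}.md"
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return path
```

Because the digest lives as plain local files, the agent can pull in only the days it needs, keeping each prompt within budget.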

Without this prerequisite, Chronicle looks like spyware. With it, Chronicle looks like the obvious architectural answer to a real engineering constraint (and a privacy problem worth weighing seriously — see [[open-question-privacy-laws]]).

