---
id: "prereq-basic-llm-understanding"
type: "prereq"
source_timestamps: ["00:04:46", "00:13:08"]
tags: ["foundational-knowledge"]
related: ["concept-context-degradation", "concept-confidently-wrong"]
reason: "Required to understand why AI agents fail in ways that traditional deterministic software does not."
sources: ["s42-job-market-split"]
sourceVaultSlug: "s42-job-market-split"
originDay: 42
---
# Basic LLM Mechanics

## What you need to know

The speaker assumes the audience understands:

- What an **LLM** (large language model) is at a conceptual level.
- What a **context window** is and why it has finite size.
- The basic **probabilistic** nature of how LLMs generate text (next-token sampling, not lookup).
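The last point can be illustrated with a minimal sketch. The "model" below is just a hypothetical fixed score table over a four-word vocabulary (a real LLM computes fresh logits from the entire context at every step); the function names, vocabulary, and scores are invented for illustration. What it shows is the mechanism itself: scores become a probability distribution via softmax, and the next token is *drawn* from that distribution rather than looked up.

```python
import math
import random

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng):
    """Draw one token at random, weighted by its softmax probability."""
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical vocabulary and next-token scores, for illustration only.
vocab = ["cat", "dog", "runs", "."]
logits = [2.0, 1.5, 0.5, -1.0]

rng = random.Random(0)
samples = [sample_next_token(vocab, logits, rng) for _ in range(5)]
print(samples)  # varies by seed: generation is sampling, not retrieval
```

Running this repeatedly with different seeds yields different token sequences from identical inputs, which is the core intuition behind why the same prompt can produce different, and sometimes confidently wrong, answers.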

## Why it matters

Without these foundations, concepts like [[concept-context-degradation]] or [[concept-confidently-wrong]] lack technical grounding — they will sound like superstition rather than mechanism. The entire [[framework-ai-failure-taxonomy]] builds on this prerequisite.
