---
id: "framework-ai-failure-taxonomy"
type: "framework"
source_timestamps: ["00:13:06", "00:16:05"]
tags: ["troubleshooting", "taxonomy"]
related: ["concept-failure-pattern-recognition", "concept-context-degradation", "concept-specification-drift", "concept-sycophantic-confirmation", "concept-tool-selection-error", "concept-cascading-failure", "concept-silent-failure"]
sources: ["s42-job-market-split"]
sourceVaultSlug: "s42-job-market-split"
originDay: 42
---
# AI Failure Pattern Taxonomy

## What this framework is

A classification of the six distinct ways AI agents fail in production, which differ fundamentally from human failure modes. Recognizing these patterns is essential for diagnosing and fixing broken agentic systems — that is, for the [[concept-failure-pattern-recognition]] skill.

## The six failure modes

1. **[[concept-context-degradation]]** — Output quality drops as context window pollutes.
2. **[[concept-specification-drift]]** — Agent forgets original instructions over long tasks.
3. **[[concept-sycophantic-confirmation]]** — Agent agrees with incorrect user data.
4. **[[concept-tool-selection-error]]** — Agent uses the wrong external tool or API.
5. **[[concept-cascading-failure]]** — Unverified errors propagate through multi-agent chains.
6. **[[concept-silent-failure-d42]]** — Plausible output masks an underlying execution error.

## Adjacent psychological failure mode

[[concept-confidently-wrong]] is the *human-side* of the failure spectrum — a perception error that lets several of the above (especially silent failure) go undetected.

## Diagnostic discipline

When a multi-agent system misbehaves, the practitioner should:

1. Identify *which* of the six modes is firing.
2. Trace it to a specific architectural cause (context window, prompt, tool registry, hand-off contract, dataset).
3. Add an evaluation case ([[action-build-eval-harnesses]]) that catches it in regression.
