---
id: "concept-edge-case-detection"
type: "concept"
source_timestamps: ["00:08:40", "00:09:15"]
tags: ["quality-assurance", "testing"]
related: ["concept-evaluation-quality-judgment", "entity-anthropic"]
definition: "The ability to identify scenarios where an AI system's output is correct at its core but fails under marginal, unusual, or extreme conditions."
sources: ["s42-job-market-split"]
sourceVaultSlug: "s42-job-market-split"
originDay: 42
---
# Edge Case Detection

## Sub-skill of Evaluation

**Edge Case Detection** is a sub-skill of [[concept-evaluation-quality-judgment]] that draws on deep subject-matter expertise. It is the ability to look at an AI's response and recognize that, while the core of the answer may be functionally correct, the system fails at the margins or in unusual scenarios.

Identifying these edge cases is what separates superficial prompting from robust system engineering.
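The pattern can be made concrete with a toy eval harness. The extractor, the test cases, and the `run_eval` helper below are all hypothetical illustrations (not from the source): a naive price parser passes its happy-path cases, while edge cases (negative amounts, thousands separators, whole-dollar prices) surface the marginal failures that edge case detection is meant to catch.

```python
import re

def naive_parse_price(text: str) -> float:
    """Toy extractor: handles only the happy path, e.g. '$4.99'."""
    match = re.search(r"\$(\d+\.\d{2})", text)
    if match is None:
        raise ValueError(f"no price found in {text!r}")
    return float(match.group(1))

# Happy-path cases: the core behavior is functionally correct here.
happy_path = [
    ("The total is $4.99.", 4.99),
    ("Pay $12.50 now.", 12.50),
]

# Edge cases probe the margins: negatives, separators, whole dollars.
edge_cases = [
    ("Refund of -$4.99 issued.", -4.99),
    ("List price: $1,299.00.", 1299.00),
    ("Costs $7 flat.", 7.00),
]

def run_eval(cases):
    """Run each case; collect (input, expected, actual-or-error) failures."""
    failures = []
    for text, expected in cases:
        try:
            got = naive_parse_price(text)
            if got != expected:
                failures.append((text, expected, got))
        except ValueError as exc:
            failures.append((text, expected, str(exc)))
    return failures

print("happy-path failures:", run_eval(happy_path))   # → []
print("edge-case failures:", len(run_eval(edge_cases)))  # → 3
```

A system that clears the happy-path suite looks finished; only an eval built around the edge cases reveals that it is not.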

## Anthropic's framing

As [[entity-anthropic-d42]]'s engineering blog notes, good evaluation tasks are built around edge cases to ensure the system behaves predictably under all conditions, not just the happy path. This grounds the [[contrarian-taste-is-error-detection]] reframing of 'taste' as a learnable engineering practice.
