---
id: "concept-explanation-artifact"
type: "concept"
source_timestamps: ["00:10:03", "00:10:30", "00:12:31"]
tags: ["documentation", "proof-of-work", "signaling"]
related: ["framework-5-principles-ai-era", "action-create-explanation-artifacts", "concept-production-comprehension-gap"]
definition: "A structured document accompanying AI-generated work that explicitly details the trade-offs, discarded alternatives, and blast radius, serving as proof of human comprehension."
sources: ["s14-job-market-reality"]
sourceVaultSlug: "s14-job-market-reality"
originDay: 14
---
# Explanation Artifact

## Definition

An Explanation Artifact is a **new class of deliverable** required in the AI era to prove human value. Because the code or product itself can be generated for free by AI, the product is no longer proof of expertise (see [[claim-traditional-signaling-broken]]).

## What it contains

The Explanation Artifact is a structured, plain-English document that travels alongside the shipped work. It explicitly details:

- **What** the work does.
- **Why** specific architectural choices were made.
- **What alternatives** were considered and discarded, and why.
- **Blast radius**: what fails downstream if this code fails.
- **Override points**: where the human deliberately overrode the AI's suggestion.
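
The bullets above can be sketched as a minimal template. This is an illustrative sketch only, not a fixed standard from the source; the section names are assumptions:

```markdown
# Explanation Artifact — <feature or change name>

## What it does
One-paragraph plain-English summary of the shipped work.

## Why these choices
Each key architectural decision and the reasoning behind it.

## Alternatives considered
- <Option A> — discarded because …
- <Option B> — discarded because …

## Blast radius
What fails downstream if this code fails, and who is affected.

## Override points
Where I deliberately overrode the AI's suggestion, and why.
```

In practice this would travel with the deliverable itself (for example, alongside the PR or release notes) rather than being published separately after the fact.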

## What it is NOT

- Not a marketing blog post.
- Not a post-hoc case study written for clout.
- Not a generated summary written by [[entity-claude-d14]] or [[entity-chatgpt-d14]] — readers easily detect AI "slop" and conclude you lack true comprehension.

Think of it as a **highly detailed, thoughtful Git commit message** — an inseparable part of the deliverable.

## Why it works

The artifact serves as undeniable proof that the human operator actually comprehends the system they are deploying. It closes the [[concept-production-comprehension-gap]] at the level of individual deliverables and creates verifiable proof-of-thought for [[concept-micro-job-transactions]].

## How to produce one

See the action note: [[action-create-explanation-artifacts]]. It corresponds to principles #2 and #5 of [[framework-5-principles-ai-era]].

## External validation

The pattern is supported as best practice in the wider literature. Tools like Amazon Kiro and GitHub Spec Kit are explicitly designed to enforce it — evolving commit messages and specs into detailed trade-off documents. Skeptical-testing subagents can document flaws automatically. The platform [[entity-talentboard]] is built around requiring users to produce these artifacts to back up shipped work.


## Related across days
- [[concept-comprehension-gap]]
- [[concept-comprehension-gate]]
- [[concept-taste]]
- [[claim-traditional-signaling-broken]]
