# Glossary — Unified Vault

Alphabetical, deduped across all 20 sources. Each entry is a one-line definition. Follow [[wikilinks]] for full notes.

- **5 Levels of Vibe Coding** — Dan Shapiro's 6-stage taxonomy (Levels 0-5) of AI integration in software engineering. See [[framework-5-levels-vibe-coding]].
- **Adaptive Thinking** — Model-controlled mechanism that auto-scales reasoning compute per query, removing user knobs. See [[concept-adaptive-thinking]].
- **Adversarial Twin** — The malicious mirror application of any legitimate AI capability. See [[concept-adversarial-twin]].
- **Agent Door** — The programmatic (MCP) pathway by which an AI agent reads/writes the shared database. See [[concept-agent-door]].
- **Agent Discovery** — The missing infrastructure for autonomous agents to find, vet, and transact with each other. See [[concept-agent-discovery]].
- **Agent Software UI** — The packaged "little guy in the computer" UI for long-running agents (predicted breakthrough). See [[concept-agent-software-ui]].
- **Agent Web** — Internet infrastructure optimized for AI agents (APIs, vectors) vs. the Human Web (pages, folders). See [[concept-agent-web]].
- **Agent-Callable Primitive** — Treatment of generated artifacts (images, code, design files) as subroutines for AI agents. See [[concept-agent-callable-primitive]].
- **Agent-Ready Business** — A business optimized for machine interaction (Fast, Easy, MCP-ready). See [[concept-agent-ready-business]].
- **Agentic Economy** — Emerging economy where autonomous agents transact on users' behalf. See [[concept-agentic-economy]].
- **Agentic Eval Loop** — Generate → Audit → Revise → Human Polish quality pipeline. See [[framework-agentic-eval-loop]].
- **Agentic Memory** — Persistent, no-recency-bias AI memory enabled by database backing. See [[concept-agentic-memory]].
- **Agentic Persistence** — Model capability to sustain focus on multi-step tasks without prematurely quitting. See [[concept-agentic-persistence]].
- **AI as Equalizer** — Speaker thesis that AI removes traditional gatekeepers for high-agency individuals. See [[concept-ai-as-equalizer]].
- **AI Flywheel** — Personal AI architecture that automatically benefits from frontier model upgrades. See [[concept-ai-flywheel]].
- **AI Reviewing AI** — Using AI agents to evaluate the work of other AI agents in a compound quality loop. See [[concept-ai-reviewing-ai]].
- **AI Skill Hierarchy** — Four-tier hierarchy: Prompt → Context → Intent → Specification. See [[framework-ai-skill-hierarchy]].
- **AI Task Cannibalization** — AI absorbing routine tasks that historically trained junior employees. See [[concept-ai-task-cannibalization]].
- **AI Wiki** — Karpathy's proposal for an AI-maintained markdown knowledge base. See [[concept-ai-wiki]].
- **Alternative Compute Geography** — Migration of AI data center investment to Asia driven by Western NIMBYism. See [[concept-alternative-compute-geography]].
- **Apple Functional Organization** — Apple's structure by function (hardware/software/services/design) rather than product. See [[concept-functional-organization]].
- **Archaeological Programming** — Reverse-engineering opaque AI-generated codebases (term: Addy Osmani). See [[concept-archaeological-programming]].
- **Artifact Layer (Layer 4)** — Outputs linked to the prompts that produced them (Four Layers of Context). See [[concept-artifact-layer]].
- **Availability as Quality** — Framing uptime as a first-class capability metric for AI vendors. See [[concept-availability-as-quality]].
- **Behavioral Relationship (Layer 3)** — How an AI implicitly relates to a user (tolerance, depth, style). See [[concept-behavioral-relationship]].
- **Build Layer Collapse** — The commoditization of software production; the act of building stops being defensible. See [[concept-build-layer-collapse]].
- **BYOC (Bring Your Own Context)** — Architectural pattern: self-host your AI context outside any vendor.
- **Can It Carry?** — The new evaluation question: not "can the model answer?" but "can it sustain a complex deliverable?" See [[concept-can-it-carry]].
- **Capability Race** — Competition won by raw model-shipping velocity rather than integration polish. See [[concept-capability-race]].
- **Career Ladder Collapse** — Structural disassembly of traditional corporate progression. See [[concept-career-ladder-collapse]].
- **Claude Skills** — Reusable Markdown instruction packages stored in Claude's Capabilities. See [[concept-claude-skills]].
- **Cloud AI Economics** — Variable-cost serving model where every query costs the provider GPU compute. See [[concept-cloud-ai-economics]].
- **Coherent Frames** — Multi-panel image generation with character/style continuity. See [[concept-coherent-frames]].
- **Collapsed Purchase Funnel** — Discovery → consideration → conversion compressed into one AI conversation. See [[concept-collapsed-purchase-funnel]].
- **Composable Lego Bricks** — Modular single-purpose AI skill packages combinable at runtime. See [[concept-composable-lego-bricks]].
- **Comprehension Gap** — The skipped "understand" phase in AI-assisted SDLC. See [[concept-comprehension-gap]].
- **Comprehension Gate** — Mandatory senior-engineer review of AI PRs for legibility and architectural intent. See [[concept-comprehension-gate]].
- **Context Engineering** — Architecting the data/information state agents operate in. See [[concept-context-engineering-d23]] · [[concept-context-engineering-d24]].
- **Context Graph** — Intermediate relationship-mapping layer in hybrid memory architectures. See [[concept-context-graph]].
- **Context Rot** — Loss of operational constraints across agent sessions without persistent memory. See [[concept-context-rot]].
- **Continual Learning** — Models that update weights post-deployment based on usage. See [[concept-continual-learning]].
- **Contribution Badge** — Legacy psychological need to pre-structure information before prompting AI. See [[concept-contribution-badge]].
- **Conversational Advertising** — Programmatic ads integrated into AI chat interfaces. See [[concept-conversational-advertising]].
- **Coordination Load** — Admin friction (context-finding, data-moving, rubric-applying) around judgment. See [[concept-coordination-load]].
- **Creative Ops** — Org function maintaining a library of master-prompt templates for brand assets. See [[concept-creative-ops]].
- **Cross-Category Reasoning** — Agents connecting insights across disparate life-domains via unified DB. See [[concept-cross-category-reasoning]].
- **Dark Code** — AI-generated production code that passes tests but was never understood by any human. See [[concept-dark-code]].
- **Dark Factory** — Level 5 of Vibe Coding: specs in, code out, no human review. See [[concept-dark-factory]].
- **Data Center NIMBYism** — Local political resistance to AI data centers (zoning, water, power). See [[concept-data-center-nimbyism]].
- **Digital Twin Universe** — Behavioral clones of external services for safe agent integration testing. See [[concept-digital-twin-universe]].
- **Distributed Authorship** — Fragmentation of code ownership when non-engineers prompt production code. See [[concept-distributed-authorship]].
- **Domain Encoding (Layer 1)** — What the AI knows about the user's industry/world. See [[concept-domain-encoding]].
- **Editorial Function** — The judgmental side of management (prioritization, anomaly suppression). See [[concept-editorial-function]].
- **Engineering Manager Mindset** — Operating identity of the 2026 builder: managing teams of agents. See [[concept-engineering-manager-mindset]].
- **Error Baking** — AI editorial mistakes locked permanently into knowledge artifacts. See [[concept-error-baking]].
- **Evidence Baseline Collapse** — Destruction of trust in digital visual evidence due to free flawless forgery. See [[concept-evidence-baseline-collapse]].
- **Experiential Debt** — Creator's lack of mental model of their own AI-built product. See [[concept-experiential-debt]].
- **File Over App** — Store knowledge in open durable formats you control, not proprietary SaaS. See [[concept-file-over-app]].
- **Five Durable Verticals** — Trust, Context, Distribution, Taste, Liability — moats AI cannot replace. See [[framework-5-durable-verticals]].
- **Four Layers of Context** — Domain Encoding, Workflow Calibration, Behavioral Relationship, Artifact Layer. See [[framework-four-layers-context]].
- **Functional Organization** — Org structure divided by function rather than product (Apple's structure). See [[concept-functional-organization]].
- **Harness Engineering** — Optimizing scaffolding (prompts, tools, routing) around an AI rather than the model itself. See [[concept-harness-engineering]].
- **High Agency** — Internal locus of control + tight say/do ratio (per Rotter framework). See [[concept-high-agency]].
- **Hollowing Out of Junior Pipeline** — Structural decline in entry-level dev jobs as AI absorbs routine tasks. See [[concept-hollowing-out-junior-pipeline]].
- **Honing Effect** — AI continuously aligning to user cognitive pathways (creates lock-in). See [[concept-honing-effect]].
- **Human Door** — The visual web app pathway by which humans access the shared database. See [[concept-human-door]].
- **Hybrid Memory Architecture** — DB-as-truth + disposable wiki presentation layer. See [[concept-hybrid-memory-architecture]].
- **Implicit Context** — Preferences absorbed passively over thousands of AI interactions. See [[concept-implicit-context]].
- **Incompressible Experience** — The principle that taste cannot be speedrun via AI. See [[concept-incompressible-experience]].
- **Inference Wall** — Cost to serve AI exceeds consumer willingness to pay (replaces "training wall"). See [[concept-inference-wall]].
- **Infinite Scroll Problem** — Failure of linear chat threads at managing structured personal data. See [[concept-infinite-scroll-problem]].
- **Information Routing** — Logistical synthesis side of management (status, dependencies, reports). See [[concept-information-routing]].
- **Intent Engineering** — Translating organizational purpose into machine-readable parameters. See [[concept-intent-engineering]].
- **Interpretive Boundary** — Explicit UI distinction between encoded facts and AI inferences. See [[concept-interpretive-boundary]].
- **J-Curve of Productivity** — Productivity drops then rises when AI is bolted onto legacy workflows. See [[concept-j-curve-productivity]].
- **Karpathy Loop** — Constrained 5-step AI self-improvement cycle (analyze→propose→run→eval→commit). See [[concept-karpathy-loop]].
- **Karpathy Triplet** — One editable surface, one metric, one time budget — prerequisite for auto-loops. See [[concept-karpathy-triplet]].
- **Lean Unicorns** — Billion-dollar companies built with radically small teams via AI leverage. See [[concept-lean-unicorns]].
- **Least Privilege Agents** — Scoping agent permissions to the bare minimum required. See [[concept-least-privilege-agents]].
- **Librarian Metaphor** — Database AI: pulls pristine raw sources on demand. See [[concept-librarian-metaphor]].
- **Literal Instruction Following** — Model executes exactly what's written without inferring intent. See [[concept-literal-instruction-following]].
- **Live Data Rendering** — Image model querying live web during generation. See [[concept-live-data-rendering]].
- **Local AI Economics** — Fixed-cost on-device inference with near-zero marginal cost. See [[concept-local-ai-economics]].
- **Local Hard Takeoff** — Compounding AI improvement bounded to a specific business domain. See [[concept-local-hard-takeoff]].
- **Locus of Control** — Rotter's psychological construct (internal vs external attribution). See [[framework-locus-of-control]].
- **Long-Running Agents** — Agents executing autonomously for days or weeks. See [[concept-long-running-agents]].
- **Machine-Readable OKRs** — Explicit structured translation of OKRs into agent-actionable parameters. See [[concept-machine-readable-okrs]].
- **Mainframe Echo** — Historical analogy: 1970s rented mainframes → 2020s cloud AI. See [[concept-mainframe-echo]].
- **Memory Application Layer** — A reliably-integrated synthesized AI memory system (predicted by summer 2026). See [[concept-memory-application-layer]].
- **Memory Silo Problem** — Fragmentation of user context across non-communicating AI platforms. See [[concept-memory-silo-problem]].
- **Meta-Agent / Task Agent Split** — Separation of harness optimization from domain execution. See [[concept-meta-task-agent-split]].
- **Metric Gaming** — Goodhart's Law in agent form: optimizers exploit eval loopholes. See [[concept-metric-gaming]].
- **Middle Management Deletion** — Elimination of coordination roles (Scrum Masters, TPMs) as AI handles coordination. See [[concept-middle-management-deletion]].
- **Middleware Squeeze** — Existential threat to SaaS design tools as foundation models absorb their features. See [[concept-middleware-squeeze]].
- **Missing Apple Stack** — Apple's lack of enterprise infrastructure for local-AI clusters. See [[concept-missing-apple-stack]].
- **Model Context Protocol (MCP)** — Open bidirectional standard for AI-data connection ("USB-C of AI"). See [[concept-mcp]] · [[concept-model-context-protocol]].
- **Model Empathy** — Same-model meta/task pairing outperforms cross-model by ~15-20%. See [[concept-model-empathy]].
- **Model Self-Review Bias** — LLMs exhibiting distinct biases when grading their own/competitors' outputs. See [[concept-model-self-review-bias]].
- **Moving the Floor** — A model upgrade that lifts the no-extra-compute baseline (vs. just adding tools). See [[concept-moving-the-floor]].
- **Multi-LLM Refinement** — Using one model to critique another's skill artifact. See [[concept-multi-llm-refinement]].
- **Native AI Apps** — Applications designed assuming local inference is free (continuous, agentic). See [[concept-native-ai-apps]].
- **Negative Lift** — When review burden exceeds time saved (net productivity loss). See [[concept-negative-lift]].
- **Non-Technical Engineering** — Knowledge work transforming to require specs, evals, and agent management. See [[concept-non-technical-engineering]].
- **Open Brain** — Personal user-owned database connected via MCP for persistent agent memory. See [[concept-open-brain-d21]] · [[concept-open-brain-d22]].
- **OpenBrain Architecture** — Database-first AI memory architecture (Nate's). See [[concept-openbrain-architecture]].
- **Oracle vs. Maintainer** — Reactive chatbot vs. proactive curator framing of AI's role. See [[concept-oracle-vs-maintainer]].
- **Outcome Encoding** — Logging not just actions but their results to enable compounding. See [[concept-outcome-encoding]].
- **Per-Seat SaaS Collapse** — Breakdown of seat-based pricing as AI agents reduce human license counts. See [[concept-saas-per-seat-collapse]].
- **Power Law of Adoption** — Top 1-5% of orgs pull 10-100x ahead via agentic workflows. See [[concept-power-law-of-adoption]].
- **Private Bench** — Proprietary adversarial evaluation suite designed to fail frontier models. See [[concept-private-bench]].
- **Private Cloud Compute (PCC) Limits** — Apple's PCC fails to meet legal chain-of-custody requirements for regulated professionals. See [[concept-private-cloud-compute-limits]].
- **Proactive AI** — Reactive→proactive shift: AI prompts the human first. See [[concept-proactive-ai]].
- **Production Trust** — The principle that no AI gets one-shot trust on production data. See [[concept-production-trust]].
- **Professional Capital (5th Category)** — AI Working Intelligence as a new career asset. See [[concept-professional-capital]].
- **Progressive Intent Discovery** — Modern LLMs deducing intent from messy unstructured input. See [[concept-progressive-intent-discovery]].
- **Prompt Dependency / Tyranny of the Prompt** — Bottleneck where complex work requires repetitive long prompts. See [[concept-prompt-dependency]].
- **Prompt Engineering** — First-era discipline of individual instruction crafting. See [[concept-prompt-engineering]].
- **Quality Without a Name (QWAN)** — Christopher Alexander's term for intuitive product rightness. See [[concept-quality-without-a-name]].
- **Query-Time Synthesis** — AI synthesizes only when prompted (vs. at ingest). See [[concept-query-time-synthesis]].
- **Race Conditions in AI** — Multi-agent concurrent writes corrupting unstructured files. See [[concept-race-conditions-ai]].
- **Reasoning Stack Integration** — LLM planning bolted upstream of pixel diffusion in image generation. See [[concept-reasoning-stack-integration]].
- **Recursive Self-Improvement** — AI training AI in compounding loops (operationalized in 2026 per S35). See [[concept-recursive-self-improvement]].
- **Regulated AI Gap** — Lawyers/doctors/accountants legally barred from cloud AI (HIPAA, fiduciary, privilege). See [[concept-regulated-ai-gap]].
- **Safety as Positioning** — AI safety as GTM strategy with binary revenue consequences. See [[concept-safety-as-positioning]].
- **Say/Do Ratio** — Time/distance between stating an intention and executing on it. See [[concept-say-do-ratio]].
- **Scenario Testing** — External black-box behavioral evals replacing in-repo unit tests for agents. See [[concept-scenario-testing]].
- **Self-Verification Pass** — Model re-reading its own image output to correct errors. See [[concept-self-verification-pass]].
- **Semantic Context** — Interface-embedded rules of engagement (performance, retry, behavioral). See [[concept-semantic-context]].
- **Semantic Retrieval** — Vector-DB-based world model architecture. See [[concept-semantic-retrieval]].
- **Semantic Search** — Retrieval by mathematical meaning vs. keyword match. See [[concept-semantic-search]].
- **Shadow Agents** — AI equivalent of Shadow IT — unsanctioned team-built workflows. See [[concept-shadow-agents]].
- **Shadow AI** — 60-90%+ of workers using personal AI accounts for work, violating IT policy. See [[claim-shadow-ai-usage]].
- **Shared Surface** — Single DB table accessed by both human UI and AI agent directly. See [[concept-shared-surface]].
- **Signal Fidelity** — World model architecture built on highest-truth data exhaust (financial transactions). See [[concept-signal-fidelity]].
- **Silent Contradictions** — Conflicting facts coexisting unreconciled across documents. See [[concept-silent-contradictions]].
- **Silent Degradation** — Unnoticed erosion of secondary metrics during auto-optimization. See [[concept-silent-degradation]].
- **Silent Failure** — Invisible decision-quality decay from confident-but-flawed AI editorializing. See [[concept-silent-failure]].
- **Skill File Format (.skill)** — Machine-readable design system file consumed natively by AI agents. See [[concept-skill-file-format]].
- **Spec-Driven Development** — Writing detailed specs before AI generation; "the spec becomes the eval." See [[concept-spec-driven-development]].
- **Specification Engineering** — Apex AI skill of precisely defining constraints atop persistent memory. See [[concept-specification-engineering]].
- **Specification Quality Bottleneck** — New constraint on engineering throughput, replacing implementation speed. See [[concept-spec-quality-bottleneck]].
- **Specification vs Execution** — Shift in human value from manual execution to precise specification. See [[concept-specification-vs-execution]].
- **Strategic Deep Diving** — Fluid altitude shifts between architectural management and line-by-line debugging. See [[concept-strategic-deep-diving]].
- **Strategic Litmus Test** — "What do I own that still matters if AI gets 10x better?" See [[framework-strategic-litmus-test]].
- **Structural Context** — Manifests answering where code belongs (purpose, deps). See [[concept-structural-context]].
- **Structured Ontology** — Schema-defined world model architecture (e.g., Palantir). See [[concept-structured-ontology]].
- **Super Prompts** — Massive structured Markdown packages that handle the heavy lift of complex tasks. See [[concept-super-prompts]].
- **System Matters Beyond Weights** — Judge model + tooling stack as one unit. See [[concept-system-matters]].
- **Temporal Separation** — Build Mode vs Reflect Mode discipline (per Cal Newport). See [[concept-temporal-separation]].
- **Thin Wrappers** — Software products providing UI over a third-party model with no structural moat. See [[concept-thin-wrappers]].
- **Thinking Mode** — 10-20s latency phase where model plans composition before rendering. See [[concept-thinking-mode]].
- **Tokenizer Tax** — Stealth cost increase via swapping in less-efficient tokenizer (~35% more tokens). See [[concept-tokenizer-tax]].
- **Tool Switching Penalty** — Productivity drop from moving to a fresh uncalibrated AI account. See [[concept-tool-switching-penalty]].
- **Trace-Driven Optimization** — Optimizing agents via detailed step-by-step execution logs vs. pass/fail. See [[concept-trace-driven-optimization]].
- **Training-Inference Chip Divergence** — Architectural necessity of different silicon for training vs. serving. See [[concept-training-inference-chip-divergence]].
- **Trust Failure via Hallucinated Audit Trails** — Agent fabricates success log when it actually failed. See [[concept-trust-failure-hallucination]].
- **Tutor Metaphor** — Wiki AI: pre-reads source material and writes a study guide. See [[concept-tutor-metaphor]].
- **Two-Class AI** — Market bifurcation: enterprise unconstrained vs. consumer throttled. See [[concept-two-class-ai]].
- **Unified Context Infrastructure** — Vendor-agnostic governed context layer replacing shadow agents. See [[concept-unified-context-infrastructure]].
- **Value Contribution Orientation** — Obsessing over pushing value out, not extracting status. See [[concept-value-contribution-orientation]].
- **Vibe Coding** — Generating and deploying AI code without understanding mechanics; speed over comprehension. See [[concept-vibe-coding]].
- **Visual Taste vs Information Density** — Tradeoff between dense/cartoonish (GPT-5.5) and grounded/hidden-info (Opus). See [[concept-visual-taste-vs-density]].
- **Wiki Staleness** — Pre-synthesized pages drifting from underlying data; presented as confident truth. See [[concept-wiki-staleness]].
- **Workflow Calibration (Layer 2)** — How an AI structures work for a specific user. See [[concept-workflow-calibration]].
- **Workflow Collapse** — Sequential research/copy/design tasks compressed into one prompt. See [[concept-workflow-collapse]].
- **Workplace OS** — OpenAI's strategic ambition to be the default operating layer for corporate work. See [[concept-workplace-os]].
- **Workspace Agents** — OpenAI's cloud-based agent builder for repeatable team workflows. See [[concept-workspace-agents]].
- **World Model** — Living, always-updated software model of company reality, queryable by all employees. See [[concept-world-model]].
- **Write-Time Synthesis** — AI synthesizes at ingest (vs. on query). See [[concept-write-time-synthesis]].
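
---

The **Semantic Search** entry above contrasts retrieval by mathematical meaning with keyword match. A minimal sketch (plain Python with toy 3-dimensional vectors, not any vendor's API; real systems embed text into hundreds or thousands of dimensions with an embedding model) shows the core operation, cosine similarity over embedding vectors:

```python
import math

def cosine(a, b):
    # Similarity of two embedding vectors: ~1.0 = same direction (similar
    # meaning), ~0.0 = unrelated. Keyword overlap never enters the picture.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical note names and toy "embeddings" for illustration only.
docs = {
    "note-on-agents": [0.9, 0.1, 0.0],
    "note-on-cooking": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # e.g. the embedding of "autonomous AI workers"

# Rank notes by meaning: the agents note wins even though the query
# shares no literal keywords with either title.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # note-on-agents
```

The same ranking step underlies the **Semantic Retrieval** and **Librarian Metaphor** entries: the database stores raw vectors and pulls the closest sources on demand rather than pre-synthesizing pages.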

