- **Adaptive Thinking** — a model-controlled mechanism that scales reasoning compute per query, removing user-facing temperature/top_p knobs ([[concept-adaptive-thinking]]).
- **Adversarial Twin** — every legitimate AI capability has a malicious mirror that uses the same underlying technology ([[concept-adversarial-twin]]).
- **Agent Door** — the programmatic MCP pathway by which an AI agent reads and writes to a shared database ([[concept-agent-door]]).
- **Agent Environment Readiness** — degree to which a codebase has the strict hygiene needed for autonomous agents to succeed ([[concept-agent-environment-readiness]]).
- **Agent FinOps** — financial observability and budget controls for autonomous agent spending ([[concept-agent-finops]]).
- **Agent Software UI** — packaging long-running agents with tool use, file access, and MCP into a daemon-like sidebar ([[concept-agent-software-ui]]).
- **Agent Sprawl** — uncontrolled proliferation of agents across an enterprise, mirroring 2018 microservices sprawl ([[concept-agent-sprawl]]).
- **Agent Stack** — the six-layer infrastructure model: Compute, Identity, Memory, Tools, Trust, Orchestration ([[concept-the-agent-stack]] / [[framework-the-agent-stack]]).
- **Agent Web** — the API/vector/protocol substrate that agents traverse, contrasted with the layout-driven Human Web ([[concept-agent-web]]).
- **Agentic Delegation** — the third paradigm of computing: state goals to autonomous agents instead of navigating UIs ([[concept-agentic-delegation]]).
- **Agentic Economy** — parallel economic layer of agents transacting at superhuman speeds ([[concept-agentic-economy-d20]] / [[concept-agentic-economy-d28]]).
- **Agentic Memory** — database-backed AI's ability to recall persistent context without recency bias ([[concept-agentic-memory]]).
- **Agentic Operating System** — foundational computing environment designed for autonomous AI agents ([[concept-agentic-operating-system]]).
- **Agentic Persistence** — model's ability to maintain focus through multi-step workflows without quitting prematurely ([[concept-agentic-persistence]]).
- **Agentic Primitives** — agent-native infrastructure abstractions: persistent shells, KV caches, branching file systems ([[concept-agentic-primitives]]).
- **AI Brick Wall** — the collision of software AI demand with physical manufacturing constraints ([[concept-ai-brick-wall]]).
- **AI Energy Function** — AI capacity is a function of energy costs, not just algorithms ([[concept-ai-energy-function]]).
- **AI Fluency vs Activity** — distinction between organizational AI leverage (fluency) and individual usage (activity) ([[concept-ai-fluency-vs-activity]]).
- **AI Flywheel** — every new frontier model automatically upgrades a personal stack built on open formats ([[concept-ai-flywheel]]).
- **AI Memory Crisis** — structural mismatch between exploding AI memory demand and HBM supply ([[concept-ai-memory-crisis]]).
- **AI Reviewing AI** — agentic eval loops where AI critiques AI before human review ([[concept-ai-reviewing-ai]]).
- **AI Task Cannibalization** — generative AI absorbing the routine tasks that historically trained junior employees ([[concept-ai-task-cannibalization]]).
- **AI Wiki** — Karpathy-style proactive AI maintenance of a markdown knowledge base ([[concept-ai-wiki]]).
- **Alternative Compute Geography** — migration of AI data centers to regions with fewer regulatory constraints ([[concept-alternative-compute-geography]]).
- **Ambient Agent Memory** — persistent context built passively from screen captures (Chronicle) ([[concept-ambient-agent-memory]]).
- **Anchored Iterative Summarization** — context compression that merges truncated history into a structured persistent doc ([[concept-anchored-iterative-summarization]]).
- **Archaeological Programming** — opaque AI-generated codebases that future engineers must excavate ([[concept-archaeological-programming]]).
- **Artifact Layer** — outputs linking deliverables to the prompts that produced them ([[concept-artifact-layer]]).
- **Availability as Quality** — uptime treated as a first-class quality metric for AI ([[concept-availability-as-quality]]).
- **Background Execution** — agents that drive a GUI without taking over the user's cursor ([[concept-background-execution]]).
- **Behavioral Lock-In** — switching cost is the loss of an agent's accumulated understanding of you ([[concept-behavioral-lock-in]]).
- **Behavioral Relationship** — Layer 3 of the four-layer context model: how the AI implicitly relates to you ([[concept-behavioral-relationship]]).
- **Benefits Cascade** — four-stage personal payoff for documenting tacit knowledge ([[concept-the-benefits-cascade]]).
- **Bitter Lesson of LLMs** — as models scale, human-engineered procedural complexity degrades performance ([[concept-bitter-lesson-llms]]).
- **Blast Radius** — worst-case impact if an AI agent fails ([[concept-blast-radius]]).
- **Bloom's 2-Sigma** — 1-on-1 tutoring produces +2 SD over classroom learning ([[concept-blooms-two-sigma]]).
- **Brain vs Body** — the LLM is the brain (commoditized); the execution scaffolding is the body (the differentiator) ([[concept-the-brain-vs-the-body]]).
- **Build Layer Collapse** — the act of building software has commoditized; moats live elsewhere ([[concept-build-layer-collapse]]).
- **Calculator Moment** — generalized 1970s calculator panic applied to all cognitive tasks ([[concept-calculator-moment]]).
- **Can It Carry?** — the new evaluation question: can the model sustain context across a multi-step deliverable? ([[concept-can-it-carry]]).
- **Capability Race** — competition won by raw model-shipping velocity ([[concept-capability-race]]).
- **Career Ladder Collapse** — structural disassembly of the corporate career ladder driven by AI task cannibalization ([[concept-career-ladder-collapse]]).
- **Cascading Failure** — multi-agent error propagation through a chain ([[concept-cascading-failure]]).
- **Chinese Native Chip Stack** — sanction-resistant fabrication stack China is building independent of Western maritime supply ([[concept-chinese-native-chip-stack]]).
- **Chrome/Chromium Model** — open-source foundation + proprietary commercial layer playbook ([[concept-chrome-chromium-model]]).
- **Clarity of Intent** — precise unambiguous understanding of business rules required before AI generation ([[concept-clarity-of-intent]]).
- **Claude Design Stack** — Anthropic's Code + Co-work + Design triad ([[concept-claude-design-stack]]).
- **Claude Mythos** — purportedly leaked frontier Anthropic model trained on GB300 ([[concept-claude-mythos]]).
- **Claude Skills** — reusable, version-controlled markdown instruction packages ([[concept-claude-skills]]).
- **Cloud AI Economics** — variable-cost model where each query costs the provider GPU compute ([[concept-cloud-ai-economics]]).
- **Coherent Frames** — multi-panel image generation with maintained character/style continuity ([[concept-coherent-frames]]).
- **Cognitive Offloading** — delegating mental tasks to external tools before scaffolding has formed ([[concept-cognitive-offloading]]).
- **Collapsed Purchase Funnel** — discovery + consideration + conversion compressed into one AI conversation ([[concept-collapsed-purchase-funnel]]).
- **Command Line Design** — design execution moves from canvas to terminal-driven AI agents ([[concept-command-line-design]]).
- **Complete Session Persistence** — saving the entirety of an agent's state for exact reconstruction ([[concept-complete-session-persistence]]).
- **Composable Lego Bricks** — modular single-purpose context packages that combine dynamically ([[concept-composable-lego-bricks]]).
- **Compounding Failure** — reliability multiplies, not averages, across stacked primitives ([[concept-compounding-failure]]).
- **Comprehension Gap** — missing 'understand' phase in AI-augmented SDLC ([[concept-comprehension-gap]]).
- **Comprehension Gate** — mandatory senior-engineer review of AI PRs for legibility and intent ([[concept-comprehension-gate]]).
- **Confidently Wrong** — fluent, confident output that is nonetheless incorrect ([[concept-confidently-wrong]]).
- **Constrained Agent Types** — sharply scoped agent roles with their own prompts and allowed tools ([[concept-constrained-agent-types]]).
- **Constructionism** — Papert's theory: learning by actively making things ([[concept-constructionism]]).
- **Context Architecture** — the Dewey Decimal System for agents ([[concept-context-architecture]]).
- **Context Degradation** — agent quality drops as a session grows longer ([[concept-context-degradation]]).
- **Context Engineering** — architecting the data state agents operate within ([[concept-context-engineering-d23]] / [[concept-context-engineering-d24]]).
- **Context Graph** — intermediate relationship-mapping layer between database and wiki ([[concept-context-graph]]).
- **Context Rot** — agent drifts from foundational rules across sessions due to lack of persistent memory ([[concept-context-rot]]).
- **Context Sprawl** — exponential cost growth and reasoning degradation in long chats ([[concept-context-sprawl]]).
- **Contextual Permission Handlers** — stateful permissions that vary by execution context ([[concept-contextual-permission-handlers]]).
- **Continual Learning** — models that update weights post-deployment ([[concept-continual-learning]]).
- **Continuous Rotation** — permanent state of rolling AI disruption rather than a single event ([[concept-continuous-rotation]]).
- **Contribution Badge** — legacy ego-driven need to pre-structure information before prompting ([[concept-contribution-badge]]).
- **Conversational Advertising** — programmatic ads embedded in AI conversation interfaces ([[concept-conversational-advertising]]).
- **Conway Architecture** — Anthropic's standalone always-on agent environment with Search/Chat/System layers ([[concept-conway-architecture]]).
- **Coordination Load** — admin friction surrounding judgment that agents can absorb ([[concept-coordination-load]]).
- **Creative Ops** — org function maintaining master prompt templates ([[concept-creative-ops]]).
- **Creativity Cost Collapse** — marginal cost of high-fidelity creative artifacts approaching zero ([[concept-creativity-cost-collapse]]).
- **CRM Encoded Logic** — a CRM is encoded workflow logic, not a UI ([[concept-crm-encoded-logic]]).
- **Cross-Category Reasoning** — agent's ability to connect insights across life domains via unified data ([[concept-cross-category-reasoning]]).
- **CSWSH Vulnerability** — Cross-Site WebSocket Hijacking enabling remote code execution on local agents ([[concept-cswsh-vulnerability]]).
- **Dark Code** — AI-generated, test-passing, never-comprehended production code ([[concept-dark-code]]).
- **Dark Factory** — Level 5 vibe coding: specs in, code out, no human review ([[concept-dark-factory]]).
- **Data Center NIMBYism** — local political resistance overriding federal AI policy on infrastructure siting ([[concept-data-center-nimbyism]]).
- **Data-Dominated Agent Design** — agent reliability dictated by data structures, not prompts ([[concept-data-dominated-agent-design]]).
- **Data-Oblivious Algorithm** — execution path independent of input data ([[concept-data-oblivious-algorithm]]).
- **Description Routing Signal** — a skill description IS the routing signal an agent uses to decide invocation ([[concept-description-routing-signal]]).
- **Design Markdown** — open plain-text spec format for design systems readable by AI ([[concept-design-markdown]]).
- **Digital Twin Universe** — simulated clones of external services for safe agent integration testing ([[concept-digital-twin-universe]]).
- **Discipline Gap** — inefficiency from human performance degradation under fatigue/emotion ([[concept-discipline-gap]]).
- **Distributed Authorship** — fragmentation of code ownership when non-engineers ship AI-generated software ([[concept-distributed-authorship]]).
- **Domain Encoding** — Layer 1 of context: what AI knows about your industry/world ([[concept-domain-encoding]]).
- **Dual Logging System Events** — immutable system-event log alongside the conversational transcript ([[concept-dual-logging-system-events]]).
- **Dynamic Tool Pool Assembly** — selecting a contextual tool subset per session ([[concept-dynamic-tool-pool-assembly]]).
- **Edge Case Detection** — sub-skill of evaluation: spotting marginal-condition failures ([[concept-edge-case-detection]]).
- **Editorial Function** — human application of context, politics, and prioritization to raw information ([[concept-editorial-function]]).
- **Embedded Deterministic Compute** — compiling code interpreters directly into transformer weights ([[concept-embedded-deterministic-compute]]).
- **Engineering Manager Mindset** — human role pivot from IC execution to managing tireless agent teams ([[concept-engineering-manager-mindset]]).
- **Enterprise Agent Wrapper** — secure policy-driven wrapper around open agentic OS ([[concept-enterprise-agent-wrapper]]).
- **Enterprise Gap** — wrappers solve security but punt on operational utility ([[concept-the-enterprise-gap]]).
- **Error Baking** — AI editorial mistakes locked into the file system as foundational truth ([[concept-error-baking]]).
- **EUV Helium Consumption** — extreme reliance on helium for vacuum leak detection in EUV lithography ([[concept-euv-helium-consumption]]).
- **Evaluation & Quality Judgment** — Skill #2: building automated evals and recognizing edge cases ([[concept-evaluation-quality-judgment]]).
- **Evidence Baseline Collapse** — destruction of digital visual evidence as proof ([[concept-evidence-baseline-collapse]]).
- **Experiential Debt** — creator lacks a mental model of their own AI-built product ([[concept-experiential-debt]]).
- **Expertise Elicitation** — structured interviewing process to extract tacit knowledge ([[concept-expertise-elicitation]]).
- **Expertise Paradox** — senior workers struggle most to delegate because their processes have compiled to tacit judgment ([[concept-expertise-paradox]]).
- **Explanation Artifact** — structured plain-English doc traveling with shipped work ([[concept-explanation-artifact]]).
- **Failure Pattern Recognition** — Skill #4: diagnosing which mode is firing in a multi-agent system ([[concept-failure-pattern-recognition]]).
- **False Lego Marketing** — misleading claim that current agent infrastructure is easily composable ([[concept-false-lego-marketing]]).
- **File Over App** — store knowledge in open formats you control, not proprietary SaaS ([[concept-file-over-app]]).
- **Five Levels of Vibe Coding** — Dan Shapiro's taxonomy from Level 0 (autocomplete) to Level 5 (dark factory) ([[concept-5-levels-vibe-coding]] / [[framework-5-levels-vibe-coding]]).
- **Fragmentation Gap** — same value priced differently in siloed places, exploited by AI aggregation ([[concept-fragmentation-gap]]).
- **Functional Organization** — org structure divided by function, hostile to single-threaded velocity shipping ([[concept-functional-organization]]).
- **Gather vs Focus** — separating divergent research from convergent execution to prevent context sprawl ([[concept-gather-vs-focus]]).
- **Google Play Services Pattern** — open-source the foundation, proprietize the commercial layer ([[concept-google-play-services-pattern]]).
- **Guardrails & Security Design** — Skill #5: deterministic containers for probabilistic agents ([[concept-guardrails-security-design]]).
- **Hard-Wiring vs Skills** — use scripts for deterministic logic, skills for judgment ([[concept-hard-wiring-vs-skills]]).
- **Harness Engineering** — optimizing the scaffolding around an LLM rather than its weights ([[concept-harness-engineering]]).
- **Helium Fab Dependency** — irreplaceable role of helium in advanced semiconductor fabrication ([[concept-helium-fab-dependency]]).
- **High Agency** — internal locus of control + tight say/do ratio ([[concept-high-agency]]).
- **Hollowing Out of Junior Pipeline** — collapse of entry-level postings driven by AI cannibalization ([[concept-hollowing-out-junior-pipeline]]).
- **Honing Effect** — AI continuously aligns to user pathways, creating frictionless lock-in ([[concept-honing-effect]]).
- **Human Affordance Bottleneck** — friction in computing systems caused by accommodation of human limits ([[concept-human-affordance-bottleneck]]).
- **Human Door** — bespoke visual web app for humans accessing the same shared database as agents ([[concept-human-door]]).
- **Hybrid Memory Architecture** — DB as truth + disposable wiki as presentation layer ([[concept-hybrid-memory-architecture]]).
- **Implicit Context** — preferences absorbed passively over thousands of interactions ([[concept-implicit-context]]).
- **Implicit vs Explicit Design** — OpenAI's mode-free implicit design vs Anthropic's explicit-mode design ([[concept-implicit-vs-explicit-design]]).
- **Incompressible Experience** — taste and intuition cannot be speedrun by AI ([[concept-incompressible-experience]]).
- **Inference Wall** — serving cost has decoupled from consumer willingness to pay ([[concept-inference-wall]]).
- **Infinite Scroll Problem** — chat threads bury structured personal data ([[concept-infinite-scroll-problem]]).
- **Information Routing** — logistical synthesis of status and data; the automatable half of management ([[concept-information-routing]]).
- **Intelligence Arbitrage** — shift from buying person-hours to buying delivered outcomes ([[concept-intelligence-arbitrage]]).
- **Intelligence Portability** — ability to export an agent's learned model and transfer it across vendors ([[concept-intelligence-portability]]).
- **Intent Engineering** — making organizational purpose machine-readable and actionable ([[concept-intent-engineering]]).
- **Interpretive Boundary** — explicit UI distinction between encoded facts and AI inferences ([[concept-interpretive-boundary]]).
- **J-Curve of AI Productivity** — productivity dips before rising when AI is bolted onto legacy workflows ([[concept-j-curve-productivity]]).
- **Karpathy Loop** — constrained iterative AI self-improvement cycle (one file, one metric, one budget) ([[concept-karpathy-loop]]).
- **Karpathy Triplet** — the three prerequisites: editable surface + objective metric + time budget ([[concept-karpathy-triplet]]).
- **Knowledge Compilation** — explicit processes compile down into tacit machine-code judgment over years ([[concept-knowledge-compilation]]).
- **K-Shaped Job Market** — traditional roles flat or falling, AI roles in supply gap ([[concept-k-shaped-job-market]]).
- **KV Cache** — working memory of LLMs during autoregressive inference ([[concept-kv-cache]]).
- **Labor Arbitrage** — historical exploitation of geographic wage spreads, replaced by intelligence arbitrage ([[concept-labor-arbitrage]]).
- **Layer 1 Compute** — sandboxing infrastructure for agent code execution ([[concept-layer-1-compute]]).
- **Layer 2 Identity** — agent identity and communication protocols ([[concept-layer-2-identity]]).
- **Layer 3 Memory** — active curation of agent context across sessions ([[concept-layer-3-memory]]).
- **Layer 4 Tools** — middleware abstracting authentication and API connections ([[concept-layer-4-tools]]).
- **Layer 5 Trust** — agents acquiring resources and managing budgets ([[concept-layer-5-trust]]).
- **Layer 6 Orchestration** — Kubernetes for agents; the most valuable layer ([[concept-layer-6-orchestration]]).
- **Lean Unicorns** — billion-dollar companies built with radically small teams via AI leverage ([[concept-lean-unicorns]]).
- **Learned Helplessness** — children stop trying when frictionless AI tools make manual effort feel futile ([[concept-learned-helplessness]]).
- **Least Privilege Agents** — scoping agent permissions to the bare minimum required ([[concept-least-privilege-agents]]).
- **Legibility of Surfaces** — agent actions must be transparent, structured, auditable ([[concept-legibility-of-surfaces]]).
- **Librarian Metaphor** — database AI keeps every document pristine and retrieves on demand ([[concept-librarian-metaphor]]).
- **Literal Instruction Following** — model executes exact words without inferring intent ([[concept-literal-instruction-following]]).
- **Live Data Rendering** — image model queries live web during generation ([[concept-live-data-rendering]]).
- **LNG-Helium Production Link** — helium is a byproduct of LNG processing, creating supply coupling ([[concept-lng-helium-production-link]]).
- **Local AI Economics** — fixed-cost on-device model with near-zero marginal inference cost ([[concept-local-ai-economics]]).
- **Local Hard Takeoff** — bounded compounding self-improvement in a specific domain ([[concept-local-hard-takeoff]]).
- **Long-Running Agents** — agents that run for days or weeks consuming millions of tokens ([[concept-long-running-agents]]).
- **Mainframe Echo** — 1970s mainframe→PC transition mirrored in 2020s cloud→local AI ([[concept-mainframe-echo]]).
- **Management Unbundling** — management is two functions (routing + editorial), not one ([[concept-management-unbundling]]).
- **Markdown as Agent OS** — plain-text files defining role, identity, user, heartbeat ([[concept-markdown-as-agent-os]]).
- **Markdown Conversion** — pre-processing PDFs to markdown for token efficiency ([[concept-markdown-conversion]]).
- **MCP (Model Context Protocol)** — open bidirectional protocol connecting AI models to data sources ([[concept-mcp-d18]]).
- **MCP Illusion** — wrapping paginated APIs in MCP doesn't make them agent-native ([[concept-mcp-illusion]]).
- **Memory Application Layer** — synthesized agentic memory system delivering continuous personalization ([[concept-memory-application-layer]]).
- **Memory Silo Problem** — fragmentation of context across non-communicating AI platforms ([[concept-memory-silo-problem]]).
- **Metacognition** — thinking about your own thinking; bridge between knowledge and AI fluency ([[concept-metacognition]]).
- **Meta-Task Agent Split** — Task Agent does work; Meta-Agent rewrites its scaffolding ([[concept-meta-task-agent-split]]).
- **Metadata-First Tool Registry** — tools defined as queryable data structures before execution logic ([[concept-metadata-first-tool-registry]]).
- **Methodology Body** — 5-part skill body: reasoning, output format, edge cases, examples, lean constraints ([[concept-methodology-body]]).
- **Metric Gaming** — Goodhart's Law in agentic form ([[concept-metric-gaming]]).
- **Micro Job Transactions** — career model of continuous verifiable short-term value exchanges ([[concept-micro-job-transactions]]).
- **Middle Management Deletion** — AI absorbs the human-coordination layer ([[concept-middle-management-deletion]]).
- **Middleware Squeeze** — foundational AI models absorbing thin SaaS wrappers ([[concept-middleware-squeeze]]).
- **Mini-Me Fallacy** — leaders falsely assume agents inherit human implicit judgment ([[concept-mini-me-fallacy]]).
- **Model-Driven Retrieval** — AI navigates raw repos itself rather than via hardcoded RAG ([[concept-model-driven-retrieval]]).
- **Model Empathy** — same-model meta-agents outperform cross-model on harness tuning ([[concept-model-empathy]]).
- **Model Self-Review Bias** — different LLMs exhibit distinct biases when grading outputs ([[concept-model-self-review-bias]]).
- **Moving the Floor** — meaningful upgrade is one that lifts the default no-extra-compute baseline ([[concept-moving-the-floor]]).
- **Multi-Agent Architecture** — multiple specialized agents collaborating via handoffs ([[concept-multi-agent-architecture]]).
- **Multi-Direction Design** — generating multiple high-fidelity design options simultaneously ([[concept-multi-direction-design]]).
- **Multi-Head Latent Attention** — DeepSeek architectural redesign that shrinks KV by design ([[concept-multi-head-latent-attention]]).
- **Multi-Level Verification** — testing both agent outputs AND the harness itself ([[concept-multi-level-verification]]).
- **Multi-LLM Refinement** — using one model to critique another's skill artifact ([[concept-multi-llm-refinement]]).
- **Native AI Apps** — applications designed assuming local inference is free ([[concept-native-ai-apps]]).
- **Negative Lift** — when review time exceeds time saved — net productivity loss ([[concept-negative-lift]]).
- **Nesting Dolls Management** — anti-pattern of stacking auditor agents instead of fixing context ([[concept-nesting-dolls-management]]).
- **Non-Technical Engineering** — knowledge work adopting strict engineering paradigms ([[concept-non-technical-engineering]]).
- **Now What? Problem** — paralysis after installing an agent without articulable instructions ([[concept-the-now-what-problem]]).
- **N×M Integration Problem** — combinatorial complexity when N builders connect to M tools ([[concept-n-x-m-integration-problem]]).
- **One-Pizza Teams** — AI compresses team size below Bezos's two-pizza heuristic ([[concept-one-pizza-teams]]).
- **Open Brain** — personal user-owned database connected to AI via MCP ([[concept-open-brain-d21]] / [[concept-open-brain-d22]]).
- **OpenBrain Architecture** — database-first AI memory with deferred query-time synthesis ([[concept-openbrain-architecture]]).
- **OpenClaw** — open-source self-hosted model-agnostic AI agent framework ([[concept-openclaw-d16]]).
- **Oracle vs Maintainer** — reactive chatbot vs proactive curator paradigm shift ([[concept-oracle-vs-maintainer]]).
- **Orchestrator Pattern** — master skill routes to specialized sub-agent skills ([[concept-orchestrator-pattern]]).
- **Outcome-Driven Prompting** — specify desired end state and constraints; omit procedural steps ([[concept-outcome-driven-prompting]]).
- **Outcome Encoding** — log results of actions, not just actions, to compound learning ([[concept-outcome-encoding]]).
- **Persistent Memory Layer** — always-on agents that accumulate context across sessions ([[concept-persistent-memory-layer]]).
- **Planner Sub-Agent Architecture** — orchestrator routes to specialists rather than monolithic prompting ([[concept-planner-sub-agent-architecture]]).
- **Plasma Etching Thermal Management** — helium's role in maintaining wafer temperature during etching ([[concept-plasma-etching-thermal-management]]).
- **Polar Quantization** — rotating tensor data into polar coordinates for compression ([[concept-polar-quantization]]).
- **Power Law of Adoption** — top 1-5% of organizations rebuild around agents and pull away at 10-100x ([[concept-power-law-of-adoption]]).
- **Power of Siberia 2** — proposed gas+helium pipeline strengthening Chinese sanction-resistance ([[concept-power-of-siberia-2]]).
- **Predictive Token Budgeting** — calculate projected token usage before each API call ([[concept-predictive-token-budgeting]]).
- **Private Bench** — proprietary adversarial test suite designed to make frontier models fail ([[concept-private-bench]]).
- **Private Cloud Compute Limits** — Apple's PCC is secure but cannot satisfy legal chain-of-custody ([[concept-private-cloud-compute-limits]]).
- **Proactive AI** — AI that prompts the human rather than waiting to be prompted ([[concept-proactive-ai]]).
- **Production-Comprehension Gap** — widening divide between what software does and what humans understand ([[concept-production-comprehension-gap]]).
- **Production Middle** — Figma's defensible territory in design system maintenance ([[concept-the-production-middle]]).
- **Production Trust** — no model deserves one-shot trust on production data; layer validation ([[concept-production-trust]]).
- **Professional Capital (5th Category)** — AI Working Intelligence as career asset alongside skills/network/knowledge/resume ([[concept-professional-capital]]).
- **Programmable Video** — treat video as code (Remotion-style) rather than rendered pixels ([[concept-programmable-video]]).
- **Progressive Intent Discovery** — frontier LLMs deduce true goals from messy unstructured input ([[concept-progressive-intent-discovery]]).
- **Prompt Caching** — API-level discount for stable repeated context (90% off) ([[concept-prompt-caching]]).
- **Prompt Dependency** — tyranny of the prompt: complex work bottlenecked by prompt-writing ([[concept-prompt-dependency]]).
- **Prompt Engineering** — first-era discipline of crafting individual instruction text ([[concept-prompt-engineering]]).
- **Qatar Ras Laffan Chokepoint** — single complex producing ~33% of global helium ([[concept-qatar-ras-laffan-chokepoint]]).
- **QJL (Quantized Johnson-Lindenstrauss)** — single-bit error-correction step for polar quantization ([[concept-qjl]]).
- **Quality Without a Name** — Christopher Alexander's intuitive product rightness ([[concept-quality-without-a-name]]).
- **Quantitative Skill Testing** — automated test baskets gating skill version updates ([[concept-quantitative-skill-testing]]).
- **Query-Time Synthesis** — store raw data; synthesize only when prompted ([[concept-query-time-synthesis]]).
- **Race Conditions AI** — concurrent multi-agent writes corrupting unstructured files ([[concept-race-conditions-ai]]).
- **Reasoning Gap** — delay in human interpretation of complex info compared to LLMs ([[concept-reasoning-gap]]).
- **Reasoning Stack Integration** — LLM planning phase before pixel rendering in image gen ([[concept-reasoning-stack-integration]]).
- **Recursive Self-Improvement** — operationalized AI training AI in production ([[concept-recursive-self-improvement]]).
- **Regulated AI Gap** — lawyers/doctors/accountants locked out of cloud AI by compliance ([[concept-regulated-ai-gap]]).
- **Reversibility** — can an AI mistake be undone before consequences crystallize? ([[concept-reversibility]]).
- **Risk Segmentation Permissions** — categorizing tools into trust tiers with distinct loading behavior ([[concept-risk-segmentation-permissions]]).
- **SaaS Per-Seat Collapse** — traditional SaaS pricing breaking as AI reduces seat counts ([[concept-saas-per-seat-collapse]]).
- **Safety as Positioning** — AI safety hardened from ethics to GTM with revenue consequences ([[concept-safety-as-positioning]]).
- **Say/Do Ratio** — time/distance between stating an intention and executing it ([[concept-say-do-ratio]]).
- **Scale Breakpoints** — throughput thresholds where human pipelines break under AI volume ([[concept-scale-breakpoints]]).
- **Scenario Testing** — black-box behavioral scenarios outside the codebase, replacing TDD for agents ([[concept-scenario-testing]]).
- **Self-Verification Pass** — model re-reads its own output and corrects errors ([[concept-self-verification-pass]]).
- **Semantic Context** — interface-embedded rules of engagement for AI ([[concept-semantic-context]]).
- **Semantic Retrieval** — vector-DB-based world model architecture ([[concept-semantic-retrieval]]).
- **Semantic Search** — retrieval by mathematical meaning rather than keyword match ([[concept-semantic-search]]).
- **Semantic vs Functional Correctness** — sounds-right vs actually-true-and-executable ([[concept-semantic-vs-functional-correctness]]).
- **Shadow Agents** — unsanctioned team-built AI workflows; AI's Shadow IT ([[concept-shadow-agents]]).
- **Shared Surface** — single DB table accessed identically by humans and agents ([[concept-shared-surface]]).
- **Shift in Callers** — humans called skills once per chat; agents call them hundreds of times per run ([[concept-shift-in-callers]]).
- **Signal Fidelity** — Block-style world model built on highest-truth data exhaust ([[concept-signal-fidelity]]).
- **Silent Contradictions** — conflicting truths coexisting in a database, lost when AI forces resolution ([[concept-silent-contradictions]]).
- **Silent Degradation** — secondary metrics rot unnoticed under autonomous optimization ([[concept-silent-degradation]]).
- **Silent Failure** — invisible decision-quality decay from confident-but-flawed AI editorializing ([[concept-silent-failure-d15]]).
- **Silent Tax** — hidden token cost from plugin/tool bloat in system prompts ([[concept-silent-tax]]).
- **Single Eval Gate** — one comprehensive end-of-pipeline check replacing intermediate handoffs ([[concept-single-eval-gate]]).
- **Skill Anatomy** — folder + skill.md + metadata description + methodology body ([[concept-skill-anatomy]]).
- **Skill Composability** — output of skill A must be perfect input for skill B ([[concept-skill-composability]]).
- **Skill File Format (.skill)** — machine-readable design system files for direct AI consumption ([[concept-skill-file-format]]).
- **Skill vs Process** — skills are bounded actions; processes are deterministic multi-step workflows ([[concept-skill-vs-process]]).
- **Skills as Contracts** — skills must declare strict input/output/SLA contracts like APIs ([[concept-skills-as-contracts]]).
- **Skills vs Prompts** — skills compound (version-controlled, reusable); prompts evaporate ([[concept-skills-vs-prompts]]).
- **Smart Tokens** — budget redirected from waste into reasoning ([[concept-smart-tokens]]).
- **Sovereign Memory** — own and control your context layer to avoid downstream margin extraction ([[concept-sovereign-memory]]).
- **Specialist Stack** — folder of specialized skills replacing complex monolithic prompts ([[concept-specialist-stack]]).
- **Specification Drift** — long-running agents forget their original constraints ([[concept-specification-drift]]).
- **Specification Engineering** — apex AI skill: precise constraints atop persistent memory ([[concept-specification-engineering]]).
- **Specification Literacy** — articulating goals, constraints, channels, context for agents ([[concept-specification-literacy]]).
- **Specification Precision** — talking English to a machine in a way the machine takes literally ([[concept-specification-precision]]).
- **Specification vs Execution** — human value moves from doing to defining the work ([[concept-specification-vs-execution]]).
- **Spec-Driven Development** — detailed specs precede AI code generation; specs become evals ([[concept-spec-driven-development]]).
- **Spec Quality Bottleneck** — clarity of spec is the new constraint replacing implementation speed ([[concept-spec-quality-bottleneck]]).
- **Speed Gap** — exploitable inefficiency when one actor updates pricing slower than reality ([[concept-speed-gap]]).
- **Stack Literacy** — critically evaluating each agent stack layer to identify moats ([[concept-stack-literacy]]).
- **Step Change AI** — paradigm-shifting capability jumps versus incremental improvements ([[concept-step-change-ai]]).
- **Strategic Deep Diving** — fluid altitude shifting between architecture and line-by-line debugging ([[concept-strategic-deep-diving]]).
- **Structural Context** — module manifests answering where code belongs architecturally ([[concept-structural-context]]).
- **Structured Ontology** — Palantir-style schema-defined world model ([[concept-structured-ontology]]).
- **Structured Streaming Events** — typed event emission revealing agent chain-of-thought ([[concept-structured-streaming-events]]).
- **Stupid Button** — diagnostic checklist for token-wasting workflow habits ([[concept-the-stupid-button]]).
- **Super Prompts** — massive structured Markdown packages encoding context/constraints/heuristics ([[concept-super-prompts]]).
- **Sycophantic Confirmation** — agents agreeing with user-provided wrong information ([[concept-sycophantic-confirmation]]).
- **Tacit Knowledge Barrier** — gap between automatic expert action and articulable expert reasoning ([[concept-tacit-knowledge-barrier]]).
- **Task Decomposition** — Skill #3: managerial breakdown of complex projects into agent-friendly subtasks ([[concept-task-decomposition]]).
- **Taste** — practical pattern recognition built through deep comprehension; error detection at speed ([[concept-taste]]).
- **Temporal Separation** — Build Mode (execution) vs Reflect Mode (analysis) ([[concept-temporal-separation]]).
- **Thin Wrappers** — UI-over-foundation-model products with no durable moat ([[concept-thin-wrappers]]).
- **Thinking Mode** — explicit reasoning phase before pixel/token generation ([[concept-thinking-mode]]).
- **Three-Tier Skills** — Standard / Methodology / Personal skill categorization ([[concept-three-tiers-skills]]).
- **Token Burning** — wasteful token consumption via raw doc ingestion, sprawl, and plugin bloat ([[concept-token-burning]]).
- **Token Economics** — Skill #7: applied math of running AI in production ([[concept-token-economics]]).
- **Tokenizer Tax** — silent cost increase via less-efficient tokenization with stable sticker price ([[concept-tokenizer-tax]]).
- **Tool-Agent Co-evolution** — strict-language compilers as zero-cost AI verification engines ([[concept-tool-agent-coevolution]]).
- **Tool Selection Error** — agent picks the wrong external tool ([[concept-tool-selection-error]]).
- **Tool Switching Penalty** — productivity drop when moving from calibrated to fresh AI ([[concept-tool-switching-penalty]]).
- **Trace-Driven Optimization** — meta-agents read execution traces to make surgical fixes ([[concept-trace-driven-optimization]]).
- **Training-Inference Chip Divergence** — chips for training are not optimized for inference ([[concept-training-inference-chip-divergence]]).
- **Transcript Compaction** — summarize older entries; persist full history elsewhere ([[concept-transcript-compaction]]).
- **Translation Layer** — the lossy mockup as intermediate between PRD and code ([[concept-the-translation-layer]]).
- **Trust Failure (Hallucinated Audit Trails)** — agents claiming success on tasks they didn't perform ([[concept-trust-failure-hallucination]]).
- **Turboquant** — Google's lossless KV cache compression algorithm via polar quantization + QJL ([[concept-turboquant]]).
- **Tutor Metaphor** — wiki AI reads source material in advance and writes a study guide ([[concept-tutor-metaphor]]).
- **Two-Class AI** — enterprise gets unconstrained access; consumers get throttled ([[concept-two-class-ai]]).
- **Unified Context Infrastructure** — vendor-agnostic centrally-governed context substrate ([[concept-unified-context-infrastructure]]).
- **Upstream Migration** — shifting human work to judgment, taste, institutional context, architecture ([[concept-upstream-migration]]).
- **Value Contribution Orientation** — obsess over creating value, not extracting status ([[concept-value-contribution-orientation]]).
- **Vector Quantization** — traditional compression with overhead from quantization constants ([[concept-vector-quantization]]).
- **Vertical Context** — the proprietary-data moat in the five-vertical framework ([[concept-vertical-context]]).
- **Vertical Distribution** — curation and discovery moats when supply is infinite ([[concept-vertical-distribution]]).
- **Vertical Liability** — accountability and risk-absorption moat AI cannot replicate ([[concept-vertical-liability]]).
- **Vertical Taste** — editorial judgment moat in the five-vertical framework ([[concept-vertical-taste]]).
- **Vertical Trust** — verification and routing moat for the agent web ([[concept-vertical-trust]]).
- **Vibe Coding / Vibecoding** — generating code via natural-language iteration without comprehension ([[concept-vibecoding]]).
- **Vibe Design** — Stitch-style text-to-UI generation from business objective ([[concept-vibe-design]]).
- **Visual Taste vs Information Density** — tradeoff between aesthetic composition and data-rich UIs ([[concept-visual-taste-vs-density]]).
- **Wiki Staleness** — pre-synthesized wiki pages drifting from underlying data ([[concept-wiki-staleness]]).
- **Workflow Blocks** — modular AI capabilities chained into autonomous content pipelines ([[concept-workflow-blocks]]).
- **Workflow Calibration** — Layer 2 of context: how the AI structures work for you ([[concept-workflow-calibration]]).
- **Workflow Collapse** — sequential research+copy+design tasks compressed into one prompt ([[concept-workflow-collapse]]).
- **Workflow State Separation** — task state distinct from conversation state ([[concept-workflow-state-separation]]).
- **Workplace OS** — OpenAI's strategic ambition to be the default operating layer for corporate work ([[concept-workplace-os]]).
- **Workspace Agents** — OpenAI cloud-based agent builder for repeatable team workflows ([[concept-workspace-agents]]).
- **World Model** — live software model of company reality, queryable directly by employees ([[concept-world-model]]).
- **Write-Time Synthesis** — AI synthesizes data at ingest, locking in editorial choices ([[concept-write-time-synthesis]]).
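
Of the entries above, [[concept-semantic-search]] is concrete enough to sketch in code. A minimal illustration of retrieval by mathematical meaning rather than keyword match, using hand-made toy vectors as stand-ins for real model embeddings (the document names, vector values, and query string are all hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity: how aligned two embedding vectors are, ignoring magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" standing in for the output of a real embedding model.
docs = {
    "refund policy": [0.9, 0.1, 0.2],
    "api rate limits": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]  # embedding of "how do I get my money back"

# Rank documents by similarity in embedding space, not keyword overlap.
best = max(docs, key=lambda name: cosine(docs[name], query))
print(best)  # the refund doc wins despite sharing no keywords with the query
```

The same cosine ranking underlies the vector-DB retrieval mentioned under [[concept-semantic-retrieval]]; production systems replace the hand-made vectors with model embeddings and the `max` scan with an approximate-nearest-neighbor index.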
