---
id: "framework-device-shift"
type: "framework"
source_timestamps: ["00:07:08", "00:07:16"]
tags: ["paradigm-shift", "compute-history"]
related: ["concept-mainframe-echo", "concept-local-ai-economics", "concept-cloud-ai-economics", "entity-visicalc", "entity-ibm"]
steps_count: 3
sources: ["s19-apple-trillion"]
sourceVaultSlug: "s19-apple-trillion"
originDay: 19
---
# The Device Shift Model

## Summary

A three-step historical framework explaining how compute paradigms transition from **centralized, rented** models to **decentralized, owned** models, unlocking new economic use cases.

## The Three Steps

### Step 1 — Cloud Model
*Compute is centralized, remote, and metered (variable cost). Users rent time/tokens.*

- 1970s analogue: Mainframes owned by [[entity-ibm]], AT&T
- 2020s analogue: Cloud AI APIs (OpenAI, Anthropic, Google) — see [[concept-cloud-ai-economics]]

### Step 2 — Local Chip
*High-performance compute hardware is miniaturized and sold directly to the user (owned compute).*

- 1970s analogue: The MOS 6502 microprocessor that powered the Apple II
- 2020s analogue: Apple Silicon (M-series and A-series chips, Neural Engine)

### Step 3 — Device AI
*Inference leaves the cloud and lands on the device. Marginal cost drops to near zero, enabling continuous, unmetered AI applications.*

- 1970s analogue: [[entity-visicalc]] and the spreadsheet revolution
- 2020s analogue: [[concept-native-ai-apps]] running locally — see [[concept-local-ai-economics]]
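
The "marginal cost drops to near zero" claim in Step 3 can be sketched with back-of-the-envelope arithmetic: rented cloud inference is a variable cost that scales with every token, while an owned device is a one-time fixed cost. All figures below (the API price, device price, and usage level) are hypothetical placeholders for illustration, not numbers from the source:

```python
def cloud_cost(tokens: int, usd_per_million_tokens: float) -> float:
    """Step 1 economics: metered compute. Cost scales linearly with usage."""
    return tokens / 1_000_000 * usd_per_million_tokens

def months_to_break_even(device_price: float,
                         monthly_tokens: int,
                         usd_per_million_tokens: float) -> float:
    """Step 3 economics: the device is a fixed cost paid once; marginal
    inference afterwards is ~free (ignoring electricity)."""
    return device_price / cloud_cost(monthly_tokens, usd_per_million_tokens)

# Illustrative assumptions: $10 per million tokens via a cloud API,
# a $2,000 device, and an always-on app burning 50M tokens/month.
print(cloud_cost(50_000_000, 10.0))                    # 500.0 (USD, every month)
print(months_to_break_even(2000.0, 50_000_000, 10.0))  # 4.0 (months to pay back)
```

The point of the sketch is the shape of the curves, not the specific numbers: under a metered model, continuous unmetered applications are uneconomic, while under owned compute their marginal cost rounds to zero once the hardware is paid off.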

## How to Use This Framework

- **Forecasting:** When you see a Step 1 (rented, metered) compute paradigm, look for the conditions that enable Step 2 (the silicon breakthrough).
- **Investing / Building:** Step 3 — the killer app — is where outsized returns accumulate.
- **Strategy:** Incumbents who own Step 1 rarely win Step 3, because their unit economics are inverted (see [[claim-cloud-ai-unprofitable]]).

## Caveats

The analogy is imperfect — see [[concept-mainframe-echo]] for caveats about the timeline and the frontier-capability gap.
