---
id: "claim-models-not-plateauing"
type: "claim"
source_timestamps: ["00:14:27"]
tags: ["industry-trends", "model-capabilities"]
related: ["contrarian-models-plateauing", "quote-models-not-plateauing"]
speakers: ["Nate B. Jones"]
confidence: "high"
testable: false
sources: ["s45-claude-limit-chatgpt-habit"]
sourceVaultSlug: "s45-claude-limit-chatgpt-habit"
originDay: 45
---
# AI Models Are Not Plateauing — They're Accelerating

## Claim
The narrative that LLM capabilities are plateauing is **wrong**. Nate goes so far as to call the people pushing it 'liars' (see [[quote-models-not-plateauing]]). Model trajectory remains unambiguously upward; perceived plateaus are an illusion produced by users drowning capable models in bloated, sloppy context.

## Mechanism Behind The Illusion
- [[concept-context-sprawl]] dilutes attention
- [[concept-silent-tax]] eats the context window
- Raw documents (no [[concept-markdown-conversion]]) push the noise floor up
- The model 'looks dumber' when it is actually being starved of clarity

Fix the context (run [[framework-stupid-button-audit]]) and the apparent plateau often disappears — see [[claim-clean-context-cost-reduction]].

## Validation Status (from enrichment overlay)
**Mixed.**
- *Supports*: o1/o3 chain-of-thought reasoning continues improving; production benchmarks show ongoing capability gains.
- *Counters*: Apple's 'Illusion of Thinking' (2025) shows reasoning models collapse on complex puzzles past ~10–20 steps; Epoch AI (2026) reports diminishing log-linear returns on math/coding scaling.

## Confidence (as Nate states it)
**High** — but **not formally testable** in its categorical form. Best read two ways: in the *user-perceived* sense, most plateau complaints are context-hygiene problems; in the *frontier-research* sense, real capability ceilings exist on certain task types.

## Conceptual Anchor
Linked to the contrarian framing in [[contrarian-models-plateauing]].
