---
id: "concept-complicated-vs-complex"
type: "concept"
source_timestamps: ["02:05:30"]
tags: ["problem-solving", "philosophy"]
related: ["concept-hemispheric-lateralization", "concept-robotic-cat"]
---
# Complicated vs. Complex Problems

A critical distinction in problem-solving, introduced by [[entity-arthur-brooks|Arthur Brooks]].

### Complicated Problems (Left-Brain / AI-Solvable)
- Have many moving parts but **can be solved** with enough computing power, data, and analysis
- Examples: writing code, optimizing a supply chain, scheduling logistics, financial modeling
- AI can solve these **rapidly** and at superhuman scale

### Complex Problems (Right-Brain / Human-Only)
- Involve human emotion, unpredictability, and meaning
- Examples: sustaining a marriage, raising a child, finding life's purpose, processing grief
- AI **cannot solve** these, and attempting to do so makes them worse

### The Critical Warning
As Brooks states: [[quote-complex-vs-complicated|"If you try to meet a complex need with a complicated tool, you make the problem worse."]]

Applying a complicated tool (AI) to a complex problem (loneliness, meaning) doesn't just fail; it actively deepens the problem. This is illustrated vividly by the [[concept-robotic-cat|Robotic Cat Analogy]] and underpins the warnings about [[concept-artificial-intimacy|Artificial Intimacy]].

**Definition:** The distinction between problems solvable by data and logic (complicated) and problems requiring human emotion, nuance, and meaning (complex).

---
*See also: [[claim-ai-left-brain]], [[framework-ai-happiness-rules]], [[concept-hemispheric-lateralization]]*
