---
id: "prereq-vector-embeddings"
type: "prereq"
source_timestamps: ["00:13:35"]
tags: ["machine-learning", "data-structures"]
related: ["concept-semantic-search"]
reason: "Required to understand why traditional folder-based note apps fail and why pgvector is necessary."
sources: ["s22-saas-replacement"]
sourceVaultSlug: "s22-saas-replacement"
originDay: 22
---
# Understanding of Vector Embeddings

## Prerequisite

A basic working understanding of **vector embeddings**: that text can be converted into a high-dimensional vector (an array of numbers) such that conceptually similar texts produce mathematically nearby vectors.
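The idea can be sketched with cosine similarity over toy, hand-made vectors. The 4-dimensional vectors below are invented for illustration; real embedding models emit hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means pointing the
    same way (conceptually similar), near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-d "embeddings" (invented for illustration, not model output).
embeddings = {
    "dog": [0.90, 0.10, 0.00, 0.20],
    "puppy": [0.85, 0.15, 0.05, 0.25],
    "invoice": [0.00, 0.90, 0.80, 0.10],
}

# Conceptually similar texts land near each other in vector space...
print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))
# ...while unrelated texts do not.
print(cosine_similarity(embeddings["dog"], embeddings["invoice"]))
```

"Nearby" here just means a high cosine similarity (equivalently, a small angle or distance between the vectors); that is the entire geometric intuition the talk relies on.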

## Why It's Required

Without this mental model, the rest of the talk is unmotivated. You cannot understand:

- Why [[concept-semantic-search]] beats keyword search.
- Why [[entity-pgvector]] specifically (rather than vanilla [[entity-postgresql]]) is the storage choice.
- Why folder hierarchies are the wrong abstraction for an AI agent.
- Why the [[concept-agent-web]] is a meaningfully different paradigm from the Human Web.
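The first point above can be sketched as brute-force nearest-neighbor search over note embeddings. The titles and approximately unit-normalized 3-d vectors below are hypothetical; pgvector performs the same ranking inside Postgres with a distance operator and an index rather than a full scan.

```python
# Toy, roughly unit-normalized 3-d "note embeddings" (invented for
# illustration; real models emit hundreds or thousands of dimensions).
notes = {
    "Cancel the gym membership": [0.10, 0.97, 0.22],
    "Q3 invoice for Acme": [0.95, 0.05, 0.30],
    "Travel checklist for Lisbon": [0.30, 0.20, 0.93],
}

def nearest_notes(query_vec, notes, k=1):
    """Rank notes by dot product with the query vector (equivalent to
    cosine similarity here, since the vectors are near unit length)
    and return the k best titles."""
    ranked = sorted(
        notes,
        key=lambda title: sum(q * x for q, x in zip(query_vec, notes[title])),
        reverse=True,
    )
    return ranked[:k]

# A query like "fitness" shares no keywords with any title, so keyword
# search finds nothing -- but its (hypothetical) embedding lands near
# the gym note, so semantic search still retrieves it.
fitness_query = [0.12, 0.95, 0.25]
print(nearest_notes(fitness_query, notes))
```

This is exactly what keyword search cannot do: "fitness" matches zero words in "Cancel the gym membership", yet the vectors are nearby because the concepts are.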

## Minimum Bar

If you can answer the question "what does it mean for two pieces of text to be near each other in vector space?", you have enough to follow along.
