---
id: "prereq-generative-ai-coding"
type: "prereq"
source_timestamps: ["00:05:41"]
tags: ["llms", "developer-tools"]
related: ["concept-vibecoding", "entity-claude", "entity-chatgpt"]
reason: "Required to understand why 'vibecoding' is a prevalent anti-pattern and why generation speed has outpaced human comprehension."
sources: ["s14-job-market-reality"]
sourceVaultSlug: "s14-job-market-reality"
originDay: 14
---
# Familiarity with Generative AI Coding Workflows

## What you need to know

The speaker assumes the audience is already familiar with how tools like Cursor, GitHub Copilot, [[entity-claude-d14]], or [[entity-chatgpt-d14]] are used to rapidly prompt for, generate, and iterate on code.

## Why it's required

Without this baseline, the argument about [[concept-vibecoding]] and the [[concept-production-comprehension-gap]] cannot land: you need a visceral sense of how fast and frictionless modern AI code generation is to grasp why that speed creates a *new* class of risk.

## Quick orientation

- LLMs can generate working code from natural-language prompts.
- Iteration cycles are measured in seconds, not hours.
- "Working" often means "compiles and passes the happy path," not "production-safe."
- The minimum effort and skill needed to produce *something* has collapsed.
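The "happy path" point above can be made concrete. A minimal sketch (the function below is hypothetical, written to resemble typical first-pass generated code, not taken from the source): it passes the obvious test, yet fails on inputs any production system would see.

```python
# Hypothetical example of code that "works" but is not production-safe:
# it parses the one format the prompt mentioned and nothing else.

def parse_price(value: str) -> float:
    """Parse a price string like '$19.99' into a float."""
    return float(value.strip().lstrip("$"))

if __name__ == "__main__":
    # Happy path: looks done in seconds.
    print(parse_price("$19.99"))

    # Production reality: empty strings, placeholders, locale formats,
    # and None all blow up, because nothing was validated.
    for bad in ["", "N/A", "19,99 €", None]:
        try:
            parse_price(bad)
        except (ValueError, AttributeError, TypeError) as exc:
            print(f"{bad!r} -> {type(exc).__name__}")
```

The gap the note describes lives in exactly this distance: the generated version satisfies the prompt, while the comprehension work (which inputs are possible, which failures matter) remains undone.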
