---
id: "claim-take-home-exams-dead"
type: "claim"
source_timestamps: ["00:16:46", "00:16:55"]
tags: ["assessment", "education-policy"]
related: ["claim-ai-detection-impossible", "open-question-assessment-redesign", "action-ban-ai-detectors"]
confidence: "high"
testable: true
speakers: ["Nate B. Jones"]
sources: ["s10-vibe-codes"]
sourceVaultSlug: "s10-vibe-codes"
originDay: 10
---
# Take-Home Assignments Are Functionally Meaningless As Assessments

## Claim

Because AI can competently complete almost any take-home cognitive task (essays, research papers, coding assignments), and because [[claim-ai-detection-impossible]] holds, take-home assignments have lost their validity as a measure of student capability.

## What Educators Are Actually Doing

A growing number of college faculty are already redesigning courses entirely around:

- In-class supervised work
- Oral exams and viva-style assessments
- Whiteboard problem-solving
- Process-traced live work

Take-home work can no longer be trusted to reflect the student's own mind.

## Empirical Backing

Educator surveys (Stanford 2024–25) report that up to 70% of faculty in some departments have shifted to oral or in-class assessment, citing undetectable AI use.

## Counter-Perspective

Proctoring platforms (Proctorio, ProctorU) combined with process-tracing keystroke analytics may restore some validity at scale. The counterargument is that take-homes are *salvageable* with sufficient surveillance — at the cost of student privacy and dignity.

## The Linked Action

[[action-ban-ai-detectors]] is the constructive counterpart: stop trying to catch cheating; instead, redesign the assessment so cheating is structurally impossible.

## The Open Question

[[open-question-assessment-redesign]] addresses the scale problem: oral exams are the gold standard but resource-intensive. How does a 500-person lecture course handle this?

## Confidence And Falsifiability

High confidence and clearly testable: any institution can measure the correlation between take-home grades and proctored-exam grades before and after LLM availability. If the correlation collapses post-LLM, the claim holds; if it remains stable, the claim is falsified.
