---
id: "question-liability-dark-code"
type: "open-question"
source_timestamps: ["00:07:56", "00:08:12"]
tags: ["compliance", "legal"]
related: ["concept-distributed-authorship", "concept-dark-code", "contrarian-yolo-liability"]
resolutionPath: "Establishment of new legal precedents and updated compliance frameworks (like SOC 2) that explicitly address AI-generated software and mandate human comprehension gates."
sources: ["s23-amazon-16k-engineers"]
sourceVaultSlug: "s23-amazon-16k-engineers"
originDay: 23
---
# Who Holds Liability for Dark Code Failures?

## The Question

As non-engineers (PMs, marketers) push AI-generated code into production via [[concept-distributed-authorship]], traditional lines of accountability blur. **If [[concept-dark-code]] causes a massive data breach or a SOC 2 violation, who within the organization holds ultimate liability when no human actually understood the code?**

## Why It's Hard

- The **author** is an AI, not a legally accountable person.
- The **prompter** may be a non-engineer who never reviewed the output.
- The **engineer** who merged it may never have been required to comprehend it.
- The **CTO** signed off on adopting the AI tool but never reviewed individual PRs.

No existing compliance framework cleanly assigns liability across this chain.

## Resolution Path (Speculative)

Progress will likely require:

1. New legal precedent: court cases that establish accountability conventions for AI-generated software.
2. Updated compliance frameworks (SOC 2, HIPAA, ISO 27001) that explicitly require [[concept-comprehension-gate]]-style review with attributable human sign-off (see the sketch after this list).
3. Regulatory action: the EU AI Act already classifies risk levels based on benchmark performance (per the enrichment overlay).
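
To make item 2 concrete, here is a minimal sketch of what a comprehension gate could look like as an automated pre-merge check. This is an illustration, not anything described in the source: the names (`Attestation`, `comprehension_gate`) and the per-file attestation model are assumptions. The only point it encodes is that a merge is blocked until a named human, not an AI, signs off on every changed file, leaving an auditable record.

```python
"""Hypothetical sketch of a "comprehension gate" pre-merge check.

All names and structure here are illustrative assumptions, not from the
source. The idea: a merge is allowed only when every changed file carries
an attestation from a named human who asserts they understood the change.
"""

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Attestation:
    reviewer: str        # a named human: the accountable party
    file_path: str       # the file the reviewer claims to comprehend
    timestamp: datetime  # when the sign-off was recorded (the audit trail)


def comprehension_gate(
    changed_files: list[str],
    attestations: list[Attestation],
) -> tuple[bool, list[str]]:
    """Return (may_merge, uncovered_files).

    A file is covered only if at least one human attested to it.
    AI reviewers are excluded by construction: an Attestation names a person.
    """
    covered = {a.file_path for a in attestations}
    uncovered = [f for f in changed_files if f not in covered]
    return (not uncovered, uncovered)


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    ok, missing = comprehension_gate(
        changed_files=["billing/invoice.py", "auth/session.py"],
        attestations=[Attestation("j.doe", "billing/invoice.py", now)],
    )
    # Blocks the merge: no human attested to auth/session.py.
    print("merge allowed" if ok else f"merge blocked; unattested: {missing}")
```

In practice, the attestation record is what an updated SOC 2-style control could point to: for every piece of AI-generated code in production, a timestamped sign-off names the human who accepted accountability for it.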

## Why It Matters Strategically

The speaker frames this as the central reason [[contrarian-yolo-liability]] is correct. Distributed authorship today is effectively a free option: teams capture the upside of shipped features while the liability downside has not yet been priced in. Once liability case law catches up, organizations carrying high dark-code exposure will face that risk retroactively.
