---
id: "entity-anthropic-d17"
type: "entity"
entityType: "organization"
canonicalName: "Anthropic"
aliases: []
source_timestamps: ["00:15:11", "00:15:40"]
tags: ["ai-lab", "safety"]
related: ["concept-safety-as-positioning", "claim-anthropic-dod-ban", "framework-enterprise-ai-selection"]
sources: ["s17-3-model-drops"]
sourceVaultSlug: "s17-3-model-drops"
originDay: 17
---
# Anthropic

## Profile

A frontier AI lab (maker of Claude) that, in this scenario, has made **strict safety red lines** central to its market positioning.

## Role In This Vault

- **Archetypal safety-first vendor** in the [[framework-enterprise-ai-selection]] matrix, positioned opposite [[entity-openai-d17]].
- Refused autonomous-weapons and mass-surveillance applications. Negotiations with the Pentagon broke down; the federal government designated Anthropic a **supply-chain risk** and directed agencies to cease using its technology (see [[claim-anthropic-dod-ban]]).
- Loses defense revenue but generates **massive enterprise goodwill** among governance-sensitive corporate buyers.

## Why It Matters

Anthropic operationalizes [[concept-safety-as-positioning]]: safety is no longer an ethics or talent-retention question; it is a **go-to-market (GTM) positioning question with binary revenue consequences**.

## Validation Note

The specific Pentagon-ban claim is unverified in available public sources but is conceptually consistent with the safety-as-positioning thesis.

## Related
- [[concept-safety-as-positioning]]
- [[framework-enterprise-ai-selection]]
- [[claim-anthropic-dod-ban]]
- [[entity-openai-d17]]
- [[action-evaluate-vendor-safety]]
- [[quote-safety-positioning]]
