---
id: "entity-claude"
type: "entity"
entity_type: "product"
source_timestamps: ["01:20:00"]
tags: ["ai-models", "tools"]
related: ["concept-playbooking-method", "framework-playbook-outline", "question-data-privacy"]
canonical_url: "https://www.anthropic.com/claude"
---
# Claude (Anthropic)

## What it is

**Claude** is the family of large language models from **Anthropic**, frequently cited by the speakers as the preferred tool for running playbooks — see [[concept-playbooking-method]] and the [[framework-playbook-outline]].

## Why the summit prefers it

- **Large context windows** (200K+ tokens) — fits a full playbook, the input data, and the brand-voice guide in a single prompt (see the sketch after this list).
- **Strong instruction-following** — executes multi-step playbooks reliably.
- **Brand-voice adoption** — handles tone-matching and style cloning effectively.
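
For readers who want to see what "one prompt" means concretely, here is a minimal sketch using Anthropic's Python SDK (`pip install anthropic`). The file names, the model string, and the prompt layout are illustrative assumptions, not material from the summit.

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Hypothetical local files: the playbook instructions, the data to process,
# and the brand-voice guide all travel together in one large-context request.
playbook = open("playbook.md").read()
data = open("input_data.md").read()
voice_guide = open("brand_voice.md").read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; substitute a current model
    max_tokens=2048,
    system=voice_guide,  # brand-voice guide supplied as the system prompt
    messages=[
        {
            "role": "user",
            "content": f"{playbook}\n\n---\n\nInput data:\n{data}",
        }
    ],
)

print(response.content[0].text)
```

Placing the brand-voice guide in the `system` field rather than the user turn is one reasonable layout, not the only one; the point is simply that the full playbook and all of its inputs fit in a single request.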

## Anthropic's positioning

*"Claude is a family of large language models that power Anthropic's products with helpful, honest, and harmless AI."*

The HHH (helpful / honest / harmless) framing matters for the summit's audience because much of playbooking is about delegating real business communication — where unsafe or off-brand output is costly.

## Open question

The summit gestures at *"proven strategies to protect your data"* without specifying them — see [[question-data-privacy]]. A serious practitioner should consult Anthropic's enterprise documentation on data retention and training opt-out before loading sensitive playbooks.

## Canonical reference

https://www.anthropic.com/claude
