---
id: "entity-percepta"
type: "entity"
entityType: "organization"
canonicalName: "Percepta"
aliases: []
source_timestamps: ["10:40:00", "11:36:00", "19:17:00"]
tags: ["organization", "startup", "architecture"]
related: ["concept-embedded-deterministic-compute", "contrarian-llms-not-computers"]
sources: ["s49-killed-ram-limits"]
sourceVaultSlug: "s49-killed-ram-limits"
originDay: 49
---
# Percepta

Percepta is a startup working at the architectural frontier of LLM design.

**Key innovation**: They compiled a **WebAssembly C-interpreter** directly into the **weight matrix** of a standard PyTorch transformer. This allows the model to perform deterministic computation natively, **without external tool calls** — see [[concept-embedded-deterministic-compute]].

The model literally executes C programs through its forward pass, step-by-step, emitting a stack trace as tokens. This is a paradigm shift from 'LLM calls a tool' to 'LLM natively executes deterministic code in its own weights.'
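The claimed mechanism can be illustrated with a toy sketch. This is not Percepta's system (which is unverified; see status below), and the mini instruction set and function names here are invented for illustration: a deterministic stack machine advances one step per "forward pass" and emits its trace as tokens, analogous to the described 'executes code in its own weights' behavior.

```python
# Toy illustration only: deterministic step-by-step execution emitting
# trace tokens. Not Percepta's implementation; the mini-ISA is invented.

def run_emitting_trace(program):
    """Execute a tiny stack-machine program one step at a time,
    yielding a trace 'token' after each step, the way the note
    describes the model emitting a stack trace as tokens."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown op {op}")
        # One "token" per execution step: the op plus current stack state.
        yield f"{op} -> stack={stack}"

# Computes (2 + 3) * 4, emitting one trace token per step.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
trace = list(run_emitting_trace(program))
```

The point of the sketch is the control flow: execution is fully deterministic and interleaved with token emission, rather than delegated to an external tool call whose result is pasted back into the context.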

**Other work**: Percepta is also reported to be working on **2D attention heads** aimed at reducing the computational complexity of attention.

**Strategic relevance**: Their work materializes the contrarian thesis [[contrarian-llms-not-computers]] — namely, that overcoming the probabilistic limits of neural networks requires fundamentally rethinking the architecture, not just bolting on more tool calls.

**Status (per enrichment overlay)**: No canonical URL found in independent searches; likely an early-stage startup, unverified beyond the source extraction.
