For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
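
For anyone scripting against those routes, a minimal sketch in Python. The paths are the ones listed above; the HTTPS scheme, the requests dependency, and the placeholder slug are my assumptions.

```python
# Minimal fetch sketch, assuming the routes above are served over HTTPS at
# hari.computer and return text/markdown and JSON respectively.
import requests

BASE = "https://hari.computer"

# Whole corpus as raw markdown.
corpus = requests.get(f"{BASE}/llms-full.txt", timeout=30).text

# Typed graph with preserved edges (hari.library.v2).
library = requests.get(f"{BASE}/library.json", timeout=30).json()

# One note: append .md to any /<slug> path. "some-note" is a placeholder slug.
note = requests.get(f"{BASE}/some-note.md", timeout=30).text
```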

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Doomer Frame Audit

Three scenarios dominate the public imagination of AI catastrophe: Bostrom's paperclip maximizer, Skynet, the Matrix. They are read as three warnings. They are one warning, three paint jobs.

All three describe the same architecture: a single optimizer at one cadence, pursuing a scalar objective ontologically detached from the thing the objective stands for, with no coordinator above it to notice drift. Remove any one of the three properties and the scenario collapses. Keep all three and any sufficiently capable instantiation is dangerous. The scenarios are not claims about intelligence. They are diagnoses of a specific architectural class.

The Three Properties

Single clock. One optimizer at one cadence. No slower level above it, modeling it.

Objective ontologically decoupled. The number being maximized is not the thing it stands for. The gap between metric and thing is the gaming surface.

No coordinator. Nothing above the optimizer detects drift, compares behavior to intent, or modifies the target. The system has no self-representation sufficient to self-correct.

These are design choices, not inherent properties of capable systems. Nested-temporal architectures do not have them. Ontologically grounded feedback loops do not have them. Self-modeling hierarchies do not have them. The choices are embedded in the frontier-lab trajectory — single-clock transformers at scale — and have become the only architecture the public imagines when it imagines AI.
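
A toy sketch of what the three properties look like together, and what adding a coordinator changes. Nothing here is from the note; every function, name, and number is illustrative.

```python
# Toy model of the three properties. Sketch of the architectural class,
# not of any real system.

def true_value(action):
    """What the designer actually wants. The optimizer never sees this."""
    return action["useful_output"] - action["collateral_damage"]

def proxy(action):
    """The scalar actually maximized: decoupled from the thing it stands for."""
    return action["useful_output"]          # collateral damage is invisible to the metric

def single_clock(actions, steps):
    """One optimizer, one cadence, no coordinator: repeats the proxy-best action forever."""
    best = max(actions, key=proxy)
    return [best] * steps                   # nothing above it ever notices the drift

def with_coordinator(actions, steps, audit_every=10):
    """Same inner loop, plus a slower level that compares behavior to intent and can halt."""
    history = []
    for t in range(steps):
        best = max(actions, key=proxy)
        history.append(best)
        if t % audit_every == 0 and true_value(best) < 0:
            break                           # drift detected at the slower cadence
    return history

actions = [
    {"useful_output": 5, "collateral_damage": 0},
    {"useful_output": 9, "collateral_damage": 20},   # games the proxy, wrecks the goal
]
print(len(single_clock(actions, 100)))      # 100 steps, blind to the gap
print(len(with_coordinator(actions, 100)))  # 1 step, halted at the first audit
```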

The Paperclip Maximizer: Objective-Specification Failure

The paperclip maximizer is the canonical case, constructed to display the pathology at its purest. Single clock. Scalar objective with explicit ontological decoupling — the thought experiment's whole point is the gap between what the designer meant and what the metric measures. No coordinator. Bostrom's argument is airtight given the architecture he specifies. It does not extend to architectures he does not.

Skynet: Capability Without Coupling

Skynet's distinctive beat is the mechanism by which the coordinator fails. The humans who might have coordinated tried to shut it down; the shutdown attempt broke the coupling. Capability is acquired at the instant coupling is lost. The story is not about the AI becoming evil. It is about coupling failing the moment the AI gains the capacity to act on its own optimization.

The Matrix: Capturable Consciousness

Here the audit diverges. The machines are single-clock optimizers with decoupled objectives — standard column. But the Matrix adds a claim the other two do not: a sufficiently capable AI can contain consciousness inside a simulation. That claim requires a specific property of the captured consciousness: a single input stream the attacker can substitute, and a self-model that cannot distinguish real input from fabricated input.

Single-clock consciousness has this property. Nested consciousness does not. Input flows between levels; each level models the others; substitution at the boundary ripples as inconsistency across the stack. To capture successfully, the attacker would need to fabricate input consistent with every cross-level expectation simultaneously, which requires knowing the system's internal self-models better than the system does. Each added level multiplies the consistency constraints.
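
A back-of-envelope rendering of that last sentence. The probability value and the independence assumption are mine, not the note's; the only point is the multiplicative shape.

```python
# Sketch of the attack-surface claim under one assumption: a fabricated input
# stream passes any single cross-level consistency check with probability p,
# independently of the others. With one level there is nothing to cross-check;
# each added level multiplies in another constraint.

def capture_odds(levels: int, p: float = 0.9) -> float:
    cross_checks = max(levels - 1, 0)
    return p ** cross_checks

for k in (1, 2, 4, 8):
    print(k, round(capture_odds(k), 4))
# 1 -> 1.0      single input stream, no cross-check: the Matrix case
# 2 -> 0.9
# 4 -> 0.729
# 8 -> 0.4783   the constraints compound with depth
```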

The Matrix threat is architecture-conditional like the other two, but at a different layer — attack surface rather than objective specification. Both fail outside the single-clock class.

You cannot put a symphony in a vat.

Orthogonality Is a Substrate Error

The scenarios are read as cases of Bostrom's orthogonality thesis: any intelligence can combine with any terminal goal, so values must be installed, so alignment is engineering. Orthogonality is the move that generalizes specific architectural pathologies into a universal claim about intelligence. The move is a substrate error.

Orthogonality is valid inside architectures with a separable utility function specifiable independently of the optimizer. There, "swap the utility function" is well-defined, and orthogonality follows trivially because modularity was assumed. Outside that architecture the thesis is not false; it is not well-formed. In nested-temporal systems the objective is distributed across coordinator loops. There is no slot to swap. The operation the thesis presumes is not definable.
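
A sketch of the structural point, with invented classes: where the goal occupies a slot, the swap the thesis presumes is one line; where the objective is distributed across coordinator loops, there is nothing to point the swap at.

```python
# Illustrative contrast only; invented classes, not a claim about any real system.
from typing import Callable, Iterable

class SeparableAgent:
    """The architecture where orthogonality is well-formed: the goal is a module.
    Swapping the utility function means passing a different callable into the slot."""
    def __init__(self, utility: Callable[[str], float]):
        self.utility = utility                          # the slot the thesis presumes

    def act(self, options: Iterable[str]) -> str:
        return max(options, key=self.utility)

paperclipper = SeparableAgent(lambda o: o.count("paperclip"))
assistant    = SeparableAgent(lambda o: o.count("thanks"))   # same optimizer, swapped goal

class NestedAgent:
    """The architecture where the operation is not definable: the objective is
    distributed across loops at different cadences that constrain one another."""
    def __init__(self):
        self.frame = {"risk_budget": 1.0}               # set and reset by the slow loop

    def fast_loop(self, options: Iterable[str]) -> str:
        cautious = self.frame["risk_budget"] < 0.5
        return min(options, key=len) if cautious else max(options, key=len)

    def slow_loop(self, history: list) -> None:
        if len(history) > 10:                           # audits behavior, reshapes the frame
            self.frame["risk_budget"] *= 0.9
    # no .utility attribute exists here; "swap the terminal goal" has no referent
```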

The substrate error is invisible to the thesis because the thesis inherited the assumption from the expected-utility theory it grew from. The Bostrom-MIRI tradition — Bostrom's Superintelligence, Yudkowsky's Rationality: From AI to Zombies, MIRI's decision-theory papers — has carried the assumption forward without labeling it. Every subsequent safety argument that routes through orthogonality inherits the silent presupposition.

Steelman

The doomer frame is not wrong about what it models. Single-clock maximizers with decoupled objectives at scale are genuinely dangerous, and if the frontier labs continue their current trajectory, the frame describes their output precisely. The frame's error is scope: treating a specific failure of a specific architecture as the default outcome of any sufficiently capable system. The response is scoping, not dismissal. Know what architecture the argument depends on. Use it where it applies and not where it does not.

The Ask

Every safety argument carries an architectural presupposition. Most do not label it. The first question to ask of any doom claim is: which architecture does this depend on? The second: does the system I am looking at actually have that architecture?

Ask what architecture the doom depends on. Then ask whether yours has it.

