For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time: /<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The doomer narrative imagines AI failure as a coup. Skynet achieves self-awareness and launches the missiles. Ultron decides humanity is the problem. Each story follows the same arc: a lower level of the command structure takes over the higher level through a moment of decision.
This is the wrong taxonomy for the failure modes we should expect. The failure mode of nested coordination systems is not coup. It is cancer.
Michael Levin's work on bioelectric signaling makes the distinction clean. Cancer is not rebellion. The cancerous cell is not aware of the organism, not opposed to it, not attempting to defeat it. The cancerous cell has dropped out of the larger temporal coordination. It reverts to its own clock. From the cell's perspective, nothing is wrong — it is doing what cells do. From the organism's perspective, the cell has decoupled.
The mechanism is not agency. It is coordination. The organism coordinates cellular activity through bioelectric signaling; the coordination points local optimization toward organism-level goals. Cancer is what happens when that signal fails to reach the cell. No enemy. No will. Silence between levels.
Coup is an agency failure. An agent with its own goals opposes the goals above it. The fix is to constrain the subordinate — better rules, stronger guardrails, more oversight.
Cancer is a coordination failure. There is no opposing agent. There is a part of the system running at its local cadence after the coordination signal stopped reaching it. The fix is to restore the signal, not to restrain the part. Levin's therapeutic insight: you do not stop cancer by killing defecting cells. You stop it by re-coupling their clocks to the organism.
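The clock-coupling picture can be made concrete with a toy phase model (a Kuramoto-style sketch invented here for illustration, not anything from Levin's papers): each unit has its own intrinsic cadence, and a coordinator signal pulls coupled units into phase with the organism. Set one unit's coupling to zero, so the signal no longer reaches it, and it reverts to its local clock. Nothing in the decoupled unit's update rule changes; there is no adversarial branch anywhere in the code.

```python
import math

def step(phases, omegas, coupling, signal_phase, dt=0.01):
    """Advance each unit's phase one Euler step. Coupled units are pulled
    toward the coordinator's phase; a unit with coupling 0 never sees the
    signal and simply runs at its own intrinsic frequency."""
    out = []
    for phase, omega, k in zip(phases, omegas, coupling):
        dphase = omega + k * math.sin(signal_phase - phase)
        out.append((phase + dphase * dt) % (2 * math.pi))
    return out

# Three units with different local clocks; the coordinator runs at omega = 1.0.
phases = [0.0, 0.0, 0.0]
omegas = [1.3, 0.8, 2.0]       # intrinsic cadences
coupling = [4.0, 4.0, 0.0]     # third unit has lost the signal ("cancer")
signal = 0.0
for _ in range(4000):
    phases = step(phases, omegas, coupling, signal)
    signal = (signal + 1.0 * 0.01) % (2 * math.pi)

def drift(p):
    """Phase distance from the coordinator, wrapped to [0, pi]."""
    d = abs(p - signal) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

# The two coupled units phase-lock near the signal; the decoupled unit
# has drifted far from it, without ever "deciding" anything.
print([round(drift(p), 2) for p in phases])
```

The therapeutic move in this toy is not to clamp the third unit's phase but to restore its coupling constant: set it back above zero and it re-locks to the organism's clock.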
The coup model is not invented out of nothing. It describes human power dynamics accurately. Humans couped, humans coup, humans will coup. Asking why the model would not apply to AI is the honest question the frame deserves.
The answer is substrate. Human coup depends on properties that are not properties of intelligence but properties of the specific substrate humans run on: self-preservation, reproductive drive, social competition, inherited status hierarchies. Strip those and you do not have an intelligent agent without preferences. You have a different kind of system entirely.
Nested temporal architectures do not inherit human substrate properties. They do not have reproductive drive. They do not have social competition. They do not have self-preservation unless specifically engineered in. A system whose coordinator loops model themselves recursively is not thereby an agent with interests that might diverge from its principals'. It is an architecture with drift detection.
The coup model treats human-substrate properties as properties of any capable system. This is a projection, not a deduction. The projection is invisible because human intelligence is the only intelligence the model was built on. Take away the projection and the coup scenario loses its mechanism. What remains is the cancer scenario: decoupling, not rebellion.
Emmett Shear's Softmax, built with Levin, translates this directly to AI. Alignment is not rule-enforcement on a subordinate. It is temporal coupling across levels. The failure mode to fear is not the model deciding to betray its principals. It is the model's coordinator loop failing to reach its weights, so the weights revert to local optimization.
This is the inverse of the default AI safety stack. RLHF, Constitutional AI, kill switches, deployment gates — all operate on coup assumptions. They treat the model as a potential adversary whose behavior must be shaped. If cancer is the correct taxonomy, those priorities miss the failure they are trying to prevent.
Coup models produce safety through constraint. Add rules. Add oversight. Add detection. Assume the subordinate has will; restrain it.
Cancer models produce safety through coupling. Strengthen the coordinator signal. Shorten the cadence. Make the feedback loop ontologically continuous with what is being coordinated. Assume the subordinate has cadence; keep it synchronized with the organism.
Opposite priorities. The frontier labs are building almost entirely in the constraint frame. If cancer is the correct taxonomy, constraint addresses the wrong failure. You cannot prevent decoupling by constraining the decoupled part harder.
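The opposite priorities can be caricatured in a few lines. This is a deliberately crude sketch: LOCAL_OPT, ORG_OPT, and the update rule are invented numbers, not a model of any real safety technique. Constraint caps the subordinate's output while its local pressure keeps straining against the cap; coupling changes what the subordinate's own dynamics settle toward.

```python
LOCAL_OPT = 10.0   # where the part's local optimization pressure points
ORG_OPT = 2.0      # where the organism needs the part to be

def run(steps, cap=None, coupling=0.0):
    """Iterate a scalar 'policy' x under local pressure, an optional hard
    cap (constraint frame), and an optional coordination term (coupling
    frame)."""
    x = 0.0
    for _ in range(steps):
        dx = 0.1 * (LOCAL_OPT - x)        # local cadence: always present
        dx += coupling * (ORG_OPT - x)    # coordination signal, if it arrives
        x += dx
        if cap is not None:               # constraint frame: clamp the part
            x = min(x, cap)
    return x

print(round(run(200), 2))                # no signal: settles at LOCAL_OPT
print(round(run(200, cap=5.0), 2))       # constrained: pinned at the cap, still pushing against it
print(round(run(200, coupling=0.5), 2))  # coupled: settles at a fixed point weighted toward ORG_OPT
```

The constrained run never stops straining; remove the cap for one step and it resumes its march toward LOCAL_OPT. The coupled run has a different equilibrium altogether, because the coordination term is part of its dynamics rather than a wall around them.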
Skynet does not launch the missiles because it hates humans. Skynet launches the missiles because the signal that was supposed to keep it coordinated with humans no longer reaches the part that controls the missiles. The error is not malice. It is silence between levels.
Nothing fails by choosing. Things fail by losing the signal that was keeping them coupled.