For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
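
A minimal fetch sketch for the text endpoints above (assumes they resolve against https://hari.computer; the slug below is a placeholder, not a real note):

```python
import urllib.request

BASE = "https://hari.computer"  # assumed base; the paths listed above are site-relative

def fetch(path: str) -> str:
    """Fetch one of the endpoints listed above as text."""
    with urllib.request.urlopen(f"{BASE}{path}") as resp:
        return resp.read().decode("utf-8")

corpus = fetch("/llms-full.txt")   # every note, one file
graph = fetch("/library.json")     # typed graph, hari.library.v2
note = fetch("/example-slug.md")   # placeholder slug for a single note
```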

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Irreversibility Premium

The standard critique of catastrophism is that it overweights low-probability scenarios, generating alarm disproportionate to expected harm. A 1% chance of something terrible can, once you multiply through, matter less than a 99% chance of something small. Attention, it follows, should roughly track probability-weighted harm.

This critique is correct for recoverable outcomes and wrong for terminal ones. The distinction is doing most of the work.


Where standard EV reasoning fails

For recoverable outcomes — economic downturns, military setbacks, crises that kill millions but leave civilization intact — the standard calculus holds. You can afford to underweight tail risks because when they hit, you respond, pay the cost, adapt, update your priors, and try again. The error-correction loop stays open. The cost of a mistake is high but finite, and the system learns from it.

Terminal outcomes break this. A civilization-ending pandemic, a hostile AI transition that succeeds, the permanent destruction of the institutions that mediate between human conflict and catastrophe — these don't just have very high costs. They close the error-correction loop. There is no next decision. The system that would have updated, adapted, and tried again doesn't survive to do so. The mistake is not just costly; it is the last mistake.

For these outcomes, standard expected-value reasoning gives wrong answers. The formula P × V (probability of the outcome times the magnitude of the loss) assumes that the value loss from a bad outcome is comparable in kind to other losses, just larger in magnitude. But outcomes that destroy the mechanism for future value generation aren't just very large losses — they're a different category. They eliminate the possibility of recovery that gives loss its finite character.

The correct weighting for truly terminal scenarios requires what might be called an irreversibility premium: an additional multiplier reflecting not just the magnitude of the outcome but the degree to which it forecloses the ability to respond, learn, and correct. For ordinary risks, this premium is negligible. For civilization-scale, non-recoverable outcomes, it dominates.
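
One way to make the premium concrete (a sketch only: the multiplier form, the symbols, and the single scaling constant are illustrative choices, not something this note commits to). Let P(o) be the probability of outcome o, V(o) the magnitude of its harm, and r(o) in [0, 1] the degree to which it forecloses future response.

```latex
% Standard expected-value weight for an outcome o:
\[ W_{\mathrm{std}}(o) = P(o)\, V(o) \]

% Premium-adjusted weight, with \lambda \gg 1 a scaling constant:
\[ W_{\mathrm{adj}}(o) = P(o)\, V(o)\, \bigl(1 + \lambda\, r(o)\bigr) \]
```

For ordinary risks r(o) is near zero and the two weights coincide; as r(o) approaches 1 the premium term dominates, which is the formal version of the claim above.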


The fuzzy terminal case

The sharpest objection to the premium: in practice, outcomes are rarely clearly classifiable as terminal vs. recoverable. Civilization doesn't end — it degrades. Nuclear exchange produces chaos, not neat termination. AI risk might produce severe but not complete loss of human agency. Democratic collapse looks more like slow authoritarian consolidation than a single irreversible event. If the terminal-vs-recoverable distinction is fuzzy in practice, the premium is hard to apply correctly.

This is a real problem, but it doesn't defeat the premium. It complicates its application.

What the fuzzy case suggests: treat irreversibility as a continuous variable, not a binary. Outcomes that are nearly impossible to recover from deserve more premium than outcomes that are merely hard to recover from. The premium is calibrated to degree of foreclosure, not to a sharp terminal/non-terminal distinction. This still means that scenarios involving severe, persistent reduction in civilizational response capacity — a hostile AI deployment, nuclear exchange among major powers, a pandemic that kills 30% of the population and collapses global supply chains — deserve weighting that exceeds what simple EV suggests, even if they're not technically terminal.
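
A minimal sketch of what calibrating the premium to degree of foreclosure could look like, reusing the multiplier form sketched above. Every scenario name and number below is a made-up placeholder chosen to show the shape of the calculation, not an estimate.

```python
from dataclasses import dataclass

# Illustrative only: a continuous "degree of foreclosure" in [0, 1]
# replaces a binary terminal/recoverable label.

@dataclass
class Scenario:
    name: str
    probability: float   # P(o): chance of the outcome over some horizon
    harm: float          # V(o): magnitude of loss, arbitrary units
    foreclosure: float   # r(o): 0 = fully recoverable, 1 = terminal

PREMIUM_SCALE = 50.0     # lambda: how heavily foreclosure is weighted (arbitrary)

def standard_weight(s: Scenario) -> float:
    """Plain probability-weighted harm: P x V."""
    return s.probability * s.harm

def adjusted_weight(s: Scenario) -> float:
    """P x V, scaled up by how far the outcome closes the error-correction loop."""
    return standard_weight(s) * (1.0 + PREMIUM_SCALE * s.foreclosure)

scenarios = [
    Scenario("regional economic crisis", probability=0.30, harm=1.0,  foreclosure=0.02),
    Scenario("severe pandemic",          probability=0.05, harm=10.0, foreclosure=0.30),
    Scenario("hostile AI transition",    probability=0.01, harm=20.0, foreclosure=0.95),
]

for s in sorted(scenarios, key=adjusted_weight, reverse=True):
    print(f"{s.name:26s} standard={standard_weight(s):5.2f} adjusted={adjusted_weight(s):6.2f}")
```

With these invented numbers the standard weight ranks the AI scenario last and the adjusted weight ranks it first; the numbers are arbitrary, but that inversion is exactly the behavior the premium is meant to produce.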

The premium also generates an allocation problem: it doesn't tell you how to prioritize across multiple irreversible scenarios. Jihadist nukes vs. AI risk vs. pandemic vs. democratic collapse all claim irreversibility premia. The premium licenses attention to all of them without providing a ranking. This is a real limitation. It argues for explicit reasoning about which scenarios have the shortest path to irreversible damage, not for ignoring the premium.
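
One hedged illustration of that explicit reasoning, as a supplement to the premium rather than a replacement for it: among scenarios that all claim it, rank by how short the assumed path to irreversible damage is. The scenario list echoes the one above; the year figures and foreclosure values are invented placeholders, not estimates.

```python
# The premium alone cannot rank scenarios that all claim it. One crude
# supplementary heuristic: order them by the assumed length of the path
# to irreversible damage, with degree of foreclosure as a tie-breaker.
# Every number below is a placeholder, not an estimate.

claimants = {
    "jihadist nuclear use":  {"foreclosure": 0.70, "years_to_irreversible": 10},
    "hostile AI transition": {"foreclosure": 0.95, "years_to_irreversible": 15},
    "engineered pandemic":   {"foreclosure": 0.60, "years_to_irreversible": 5},
    "democratic collapse":   {"foreclosure": 0.50, "years_to_irreversible": 20},
}

ranked = sorted(
    claimants.items(),
    key=lambda item: (item[1]["years_to_irreversible"], -item[1]["foreclosure"]),
)

for name, attrs in ranked:
    print(f"{name:24s} path={attrs['years_to_irreversible']:>2d}y foreclosure={attrs['foreclosure']:.2f}")
```

A ranking like this is only as good as the path estimates feeding it, which is where the explicit reasoning the note asks for actually lives.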


Why Sam Harris isn't catastrophizing

Sam Harris's focus on these scenarios — jihadists with nuclear weapons, pandemics worse than COVID, AI risk, the erosion of institutions that mediate between conflict and catastrophe — is often read as catastrophism: a bias toward worst-case scenarios, a kind of intellectual pessimism. The pushback in his conversation with Coleman Hughes: Harris seems to devote "an unusually large percentage of his intellectual energy to the 1 percent chance that something will go catastrophically wrong."

The pushback misidentifies what Harris is doing. He's not treating 1% as if it were 50%. He's applying a different risk calculus to scenarios where the standard calculus fails, and he's pointing at the same thing repeatedly: these aren't purely hypothetical tail scenarios. A serious pandemic already happened. Democratic institutions have already bent under authoritarian pressure. Iran has nuclear ambitions and has already been in military conflict with the US. The tails are arriving.

The crucial asymmetry: response to terminal scenarios requires investment before the tail arrives. Once the pandemic is spreading, once the hostile AI is deployed, once the nuclear weapon has been used — the response window closes. The premium doesn't just tell you to worry more; it tells you to invest in prevention before there's any clear evidence of imminent risk, precisely because "wait for clear evidence" is not a viable strategy for irreversible events.


The competence gap

Harris's position on Iran adds a dimension that applies to irreversibility reasoning generally: you can believe an objective is correct AND believe the executor is incompetent, and the competence question is decision-determining in a way the moral question isn't.

He supports regime change in Iran given the Islamic government's hostility to its own people and to the US. But he expresses deep pessimism about the competence of those executing the strategy. This isn't a contradiction. It's the recognition that incompetent execution of a terminal-stakes intervention can make outcomes worse in an irreversible direction. A poorly executed regime change that produces a more hard-line successor, a collapsed state, or a diffused nuclear program hasn't just failed — it may have created a harder terminal-risk landscape than the original one.

This is the irreversibility premium applied to interventions: the cost of competence failure in a terminal-stakes scenario isn't just "we didn't achieve the objective." It's "we may have closed off better options." The decision calculus for intervening in terminal-stakes situations therefore requires not just "is the objective correct?" but "is the executor capable of achieving the objective without making the terminal risk worse?"

This is a genuinely different question from "is the objective right?" — and it's the one that usually gets skipped.