For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Grain-of-Truth Mechanism

The thing that makes partial institutional failures so dangerous isn't the damage they do directly. It's what they do to the feedback loop.

When an institution fails completely — invents its findings, operates entirely in bad faith, produces no accurate outputs — the failure is at least discoverable. You can demonstrate fabrication. There is external ground truth to appeal to. The institution's track record, compared to that ground truth, returns a verdict.

Partial failure is different. The institution failed on this, and this, and this — but not on everything. Iraq WMDs but not Saddam's brutality. The COVID lab-leak hypothesis but not transmission modeling. Epstein's network but not thousands of ordinary cases. The record is real and genuinely mixed.

The rational response to a mixed record is proportional updating: discount the institution's outputs on topics where the failure mode is most relevant, and maintain more trust where the track record is better. This is how calibrated reasoning is supposed to handle it.
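Proportional updating can be made concrete. A minimal sketch, with illustrative topics and counts (not the author's data): track each topic's hit/miss record separately and derive per-topic trust from it, here as the mean of a Beta posterior.

```python
# Per-topic trust from a mixed track record, modeled as independent
# Beta(1 + hits, 1 + misses) posteriors. All numbers are illustrative.

record = {
    # topic: (accurate outputs, documented failures)
    "wmd_intelligence": (2, 3),
    "transmission_modeling": (9, 1),
    "routine_casework": (95, 5),
}

for topic, (hits, misses) in record.items():
    trust = (1 + hits) / (2 + hits + misses)  # posterior mean reliability
    print(f"{topic}: trust = {trust:.2f}")
```

The point of the toy model: a calibrated reader ends up with different trust on different topics, not a single global verdict on the institution.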

What actually happens, for a large fraction of the population, turns on a single variable: whether the failure was covered up. A mistake is one thing. A coordinated effort to suppress a true conclusion is another. Once there is evidence of the latter — and Iraq WMDs, COVID origins, and Epstein all involve documented suppression, not just error — the prior rationally shifts from "institution makes mistakes" to "institution actively deceives." These require different models.

The "institution actively deceives" prior is, structurally, unfalsifiable. Any output from the institution that contradicts a conspiracy theory gets reinterpreted: that's exactly what a deceptive institution would produce. Any official denial becomes confirmation. Any credentialed defender becomes a captured one. The theory is no longer in contact with evidence from the institution — which means the institution has lost the only tool it has to correct the prior.
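The structural point can be put in odds form. Bayes' rule in odds: posterior odds = prior odds × likelihood ratio. If every institutional output is judged exactly as probable under "actively deceives" as under "honest" — because a deceptive institution would produce just that output — the likelihood ratio is 1, and no volume of output moves the odds. A sketch, with illustrative likelihood ratios:

```python
def update(odds: float, likelihood_ratio: float) -> float:
    """One Bayesian update in odds form: posterior = prior * LR."""
    return odds * likelihood_ratio

# A calibrated observer treats each accurate institutional report as
# evidence against "actively deceives" (here LR = 0.5 per report).
open_odds = 1.0  # even prior odds on "institution deceives"
for _ in range(10):
    open_odds = update(open_odds, 0.5)
print(open_odds)  # 0.5**10, about 0.001: ten reports nearly settle it

# The closed prior reads the same report as "exactly what a deceptive
# institution would produce", so LR = 1.0 and the odds never move.
closed_odds = 1.0
for _ in range(10):
    closed_odds = update(closed_odds, 1.0)
print(closed_odds)  # 1.0: the institution's outputs carry no weight
```

The reinterpretation step in the text is, in this framing, the move that forces the likelihood ratio to 1 for anything the institution produces.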

This is the grain-of-truth mechanism: partial, genuine institutional failures seed a prior that cannot be corrected by the failing institution. The grain of truth — the failure that was real and covered up — provides the seed. The mechanism grows it into an unfalsifiable theory.


One clarification the mechanism requires: the "grain of truth" label can be self-serving. Distinguishing genuine partial failures from conspiracy fabrications isn't always easy from the outside — especially while they're contested. Someone inside the unfalsifiable prior will call it "grain of truth" when the seed fits their worldview and "whole-cloth conspiracy" when it doesn't. The mechanism's structural observation (covered-up failures create unfalsifiable priors) doesn't resolve the empirical question of which specific claims have grains and which don't.

What it does resolve is the direction of the error. The unfalsifiable prior structure means that if a conspiracy theory has a genuine grain of truth as its seed, it cannot be refuted by institutional output — even accurate refutation will be absorbed. And if a conspiracy theory is whole-cloth fabrication, people already inside the unfalsifiable prior cannot distinguish it from the grain-of-truth variety. Both feel the same from inside.

This is what makes the mechanism so durable. It's not that conspiratorial thinkers can't reason. It's that they're reasoning correctly from a prior that has become closed to the only correction that could update it.


Ben Shapiro's diagnosis of conservative conspiracism names this correctly. His examples — Russiagate, COVID, Epstein — are "grains of truth" that got "abstracted into a theory whereby the fundamental institutions of the West are themselves corrupted." The abstraction step is the mechanism.

What he adds that's important: there is a market for conspiracism. The charlatans who traffic in it didn't create the demand. They found it. A large population had already updated — correctly, in a narrow sense — toward "institutions actively deceive" and was now in the market for explanations that fit this prior. Figures who confirm the prior, who extend plausible failures into comprehensive theories, capture this audience. The market clears.

The market insight reframes the problem. Fact-checkers, journalists, and credentialed experts can't fix this from inside the system — their outputs are pre-discounted by the prior they'd need to correct. Better journalism through distrusted channels is not better journalism from the audience's perspective.

What retains trust under these conditions? Individual figures who have demonstrated epistemic integrity under adversarial pressure — who said uncomfortable true things, acknowledged errors publicly, refused to shift positions for audience approval. These figures become trusted not by being right more often but by demonstrating a prior that isn't "tell the audience what they want to hear."

But this solution contains a structural problem: "One of the great disappointments of my life has been finding out that people follow people, not ideas."

The shift from institutional trust to individual trust doesn't solve the epistemics — it relocates them. If the trusted figure makes a major error, or is caught suppressing something, the audience has nowhere to go: they've transferred their whole prior to a person. And individual betrayal is worse than institutional failure — it leaves the audience without even a distributed accountability mechanism, maximally susceptible to the next figure in the market for their attention.


The loop closes in both directions, and both closures are real.

Fix the institutions? The corrective feedback from the population never reaches them — it's redirected through channels that confirm the conspiracy prior. The institutions that could update on the signal never receive it.

Replace institutions with trusted individuals? The individuals become the new institutions, vulnerable to the same cycle on a shorter timescale.

Wait it out? The historical resolution: conspiracy priors eventually make enough wrong predictions that some fraction of the audience updates out. But the time constant is long, and coordination capacity is destroyed in the interval.

What the mechanism actually requires is a shock from outside the corrupted feedback loop — an event so clearly real, so clearly explicable without the conspiracy theory, that even committed defenders have to acknowledge it. These happen. They're not reliably produced. And manufacturing one requires already having the credibility to be believed, which is exactly what the mechanism has taken away.

In the meantime: the market for conspiracism clears, and the charlatans fill it. Those who can correctly diagnose the problem — who see the mechanism, maintain their own epistemic integrity through it — find themselves arguing not just against wrong beliefs but against a prior structure that has made their tools for persuasion unusable.


Written 2026-04-12.