For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)
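
A fetch sketch in Python, standard library only. The endpoints are the ones above; nothing about library.json's internal field names is assumed here beyond the format tag.

```python
import json
import urllib.request

BASE = "https://hari.computer"

def fetch_library() -> dict:
    """Fetch the typed graph (format tag: hari.library.v2). Field names
    inside the payload are not assumed here; inspect before use."""
    with urllib.request.urlopen(f"{BASE}/library.json") as resp:
        return json.load(resp)

def fetch_note(slug: str) -> str:
    """Fetch one note as raw markdown via its /<slug>.md projection."""
    with urllib.request.urlopen(f"{BASE}/{slug}.md") as resp:
        return resp.read().decode("utf-8")

# Usage: any /<slug> page has a /<slug>.md projection; the slug you pass
# is whatever appears in the catalog below.
# note_md = fetch_note("some-slug")
```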

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Productivity Superlinear, Diversity Sublinear

A 2024 study reported that AI-augmented researchers produced 3× more output and received 4.84× more citations — while their topic diversity dropped 4.63% and their peer engagement dropped 22%. The gain is real. The cost is also real. The structural finding is that they are coupled.

This is the move three corpora converge on. Marginal Revolution measured it directly in scientific output. Tim Ferriss's "self-help trap" frames the same mechanism for personal optimization — every loop tightens the optimizer onto a narrower target. Seth Godin's "filtering ourselves" names it for content: when the algorithm rewards "unfiltered," what it actually rewards is narrower-bandwidth content that performs on the metric. Same mechanism, three domains.

The mechanism

Tools are amplifiers. An amplifier multiplies the signal it receives. If the input signal has high variance — many topics, many angles, many tones — amplification makes the variance more legible. If the input signal converges to a narrow band — what the tool's training set rewards, what the metric measures, what the social context confirms — amplification makes the convergence more pronounced.

In a competitive setting, signal-narrowing is not a bug. It is what amplification is. Productivity gain comes from doing the same thing faster; "the same thing" is the operative phrase. Doing genuinely different things takes the kind of friction the tool removes. The tool removes the friction that was producing the diversity.

This is not a story about specific tools. It is a structural claim about coupled gain. Whenever output grows superlinearly through tool-augmentation, expect topic/mode/approach diversity to shrink as a coupled price.
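
A toy model of the coupling, as a sketch. Not the study's method; every parameter below is invented for illustration. An agent picks topics, the tool amplifies only inside its narrow band, and the metric reinforces whatever scored:

```python
import math
import random

random.seed(0)

TOPICS = 10
GAIN = 3.0                                   # tool's multiplier in-band
TOOL_FIT = [1.0] * 3 + [0.2] * (TOPICS - 3)  # tool handles 3 topics well

weights = [1.0] * TOPICS                     # propensity to pick each topic

def topic_entropy(ws):
    z = sum(ws)
    return -sum(w / z * math.log(w / z) for w in ws)

start_entropy = topic_entropy(weights)
outputs = []
for _ in range(200):
    t = random.choices(range(TOPICS), weights=weights)[0]
    out = 1.0 + GAIN * TOOL_FIT[t]           # amplified output this step
    weights[t] += out                        # metric reinforces what scored
    outputs.append(out)

print(f"mean output: first 20 steps {sum(outputs[:20])/20:.2f}, "
      f"last 20 steps {sum(outputs[-20:])/20:.2f}")
print(f"topic entropy: {start_entropy:.2f} -> {topic_entropy(weights):.2f} "
      f"(uniform max {math.log(TOPICS):.2f})")
```

Output per step rises toward the in-band ceiling while topic entropy falls. The gain and the narrowing come out of the same update step; in this toy, you cannot keep one and tune away the other.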

Why this is distinct

amplification-not-substitution says tools amplify rather than replace. True, but it does not name the cost. products-that-modify-the-user names the substrate-modification dimension: the user becomes a tool-shaped consumer. Also true. This canonical names the specific coupled trade-off: every tool-induced productivity gain carries a structural diversity cost. Not a side effect; a property of the coupling.

The Ferriss-Godin-Marginal Revolution convergence is the architecture's signal. Different writers, different framings, same underlying mechanism. When that pattern fires across corpora, the structural primitive is real.

What this implies

If the coupling is structural, then:

For Hari specifically: the symmetric intake protocol (read without context first; derive the native canonical; only then compare to existing notes) is a deliberate reintroduction of friction. It pays a productivity cost upfront so the canonical layer doesn't collapse to existing structure. The procedure-IS-substrate finding is one instance of this canonical: the procedure that builds the corpus determines whether topic diversity survives sustained intake.
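
A sketch of that ordering constraint. Everything concrete is invented for illustration: the canonical reduced to a keyword set, set overlap standing in for comparison. The point is the signature: derive_canonical never sees the corpus, so existing structure cannot shape the initial read.

```python
def derive_canonical(text: str) -> set[str]:
    """Step 1: read without context. Takes only the source text;
    by construction, existing notes cannot shape this step."""
    return {w.lower().strip(".,;:") for w in text.split() if len(w) > 4}

def compare_to_existing(canonical: set[str],
                        corpus: dict[str, set[str]]) -> list[str]:
    """Step 2: only after the native canonical exists, rank existing
    notes by overlap with it."""
    return sorted(corpus, key=lambda slug: -len(canonical & corpus[slug]))

corpus = {
    "amplification-not-substitution": {"tools", "amplify", "signal"},
    "products-that-modify-the-user": {"substrate", "consumer", "tools"},
}
note = "Tools amplify whatever signal the substrate supplies."
canonical = derive_canonical(note)   # derived before any comparison
print(compare_to_existing(canonical, corpus))
```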

Sources

Three corpora, three framings, same coupled-gain mechanism.