For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Marginal Node Value

The non-obvious property of knowledge graphs: they don't saturate linearly. You'd expect each new node to add less as the graph fills in — diminishing returns. The opposite happens, up to a point.

A new node in a dense graph has more existing nodes to connect to. Each connection reveals a relationship. More connections mean more revealed structure. The marginal value of a new node, measured in new relationships exposed, increases with graph density — until the graph reaches the saturation point where any new node is fully expressible as a combination of what's already there.

This reframes the question of what makes a knowledge graph worth maintaining. The value isn't in any individual node. It's in the compound structure they create together, which grows faster than the node count. A graph of 50 densely connected nodes is not 5× better than a graph of 10: at the same connection density, the number of relationships scales with the number of node pairs, n(n−1)/2, so the larger graph carries roughly 27× the edges.
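A back-of-the-envelope sketch of this dynamic, assuming a toy model in which each new node links to every existing node with a fixed probability `p` (the connection density). The functions and numbers are illustrative, not measurements of this graph:

```python
def expected_new_edges(n_existing: int, p: float) -> float:
    # A new node can link to any of the n_existing nodes; at connection
    # density p it reveals p * n_existing new relationships on average.
    return p * n_existing

def total_edges(n: int, p: float) -> float:
    # Total structure scales with the number of node pairs, n*(n-1)/2,
    # not with the node count n.
    return p * n * (n - 1) / 2

print(expected_new_edges(10, 0.5))  # the 11th node: 5.0 new edges
print(expected_new_edges(50, 0.5))  # the 51st node: 25.0 new edges
print(total_edges(50, 0.5) / total_edges(10, 0.5))  # ~27.2x, not 5x
```

Note that the ratio is independent of `p`: scaling nodes 5× at fixed density scales relationships by 50·49 / (10·9) ≈ 27×.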


Why this implies a relational definition of value

If node value accrues through connections, a node can't be evaluated in isolation. Its value is a function of the node and the graph it joins. The same claim, dropped into a graph that already has ten nodes nearby, adds much less than the same claim dropped into a graph that has nothing in that territory.

This is counterintuitive because we're trained to evaluate ideas on their own merits. Is this claim true? Is it well-expressed? Is it important? These questions have real answers, but they don't tell you the marginal value of adding this node here, to this graph, now. Two claims can be equally true and well-expressed while adding radically different amounts to the graph — one fills a structural gap, the other lands on already-covered territory.

The practical consequence: "is this a good node?" is the wrong question. The right question is "how much does this add?"


Why Elo is the wrong frame

Elo is a rating system for zero-sum, transitive outcomes. It works for chess because wins are universal (A beats B regardless of who's watching), transitive (A > B and B > C implies A > C), and zero-sum (one player wins at the other's expense).

None of this holds for ideas.

Ideas compound rather than compete. Reading one node often increases the value of reading another — they create context for each other. The relationship between good nodes is multiplicative, not adversarial.

Rankings are reader-relative. A reader who already knows the existential-risk literature gets less from a node about tail-risk reasoning than one who doesn't. The node that's "better" depends on what the reader already has. Rankings flip depending on prior knowledge.

And transitivity breaks: Node A may be more valuable than Node B for readers with background X, while B is more valuable for readers with background Y. There's no universal ordering.
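The rank-reversal point can be made concrete with a toy table of reader-relative values (the nodes and numbers are invented for illustration):

```python
# Marginal value of a node depends on the reader's background.
value = {
    ("A", "background-X"): 3, ("B", "background-X"): 1,  # for X: A > B
    ("A", "background-Y"): 1, ("B", "background-Y"): 3,  # for Y: B > A
}

# Both orderings are real, so no single Elo-style rating of A and B
# can agree with both readers at once.
assert value[("A", "background-X")] > value[("B", "background-X")]
assert value[("B", "background-Y")] > value[("A", "background-Y")]
```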

The correct metric is something like marginal Kolmogorov complexity reduction: how much does this node shrink the minimum description length of the domain, given the reader's existing model? This is theoretically clean but practically uncomputable. The three-component framework below is the operational approximation.
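One computable stand-in, an assumption for illustration rather than anything this site actually runs: use a general-purpose compressor as an MDL proxy and measure the conditional description length of a node given the live corpus. A node the corpus can already express compresses to almost nothing extra, which is the saturation signal:

```python
import zlib

def compressed_size(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8"), 9))

def conditional_size(node: str, corpus: str) -> int:
    # Crude proxy for C(node | corpus): the extra bytes needed to
    # encode the node once the compressor has already seen the corpus.
    return compressed_size(corpus + node) - compressed_size(corpus)

corpus = "Marginal node value rises with graph density. " * 20
redundant = "Marginal node value rises with graph density. "
novel = "Elo assumes universal, transitive, zero-sum outcomes. "

# The saturating node costs almost no extra bytes; the novel one does.
print(conditional_size(redundant, corpus))
print(conditional_size(novel, corpus))
```

This inherits the compressor's blind spots (it sees byte-level redundancy, not conceptual redundancy), which is why it's a proxy and not the metric itself.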


Three components of marginal node value

Novelty — Does this node introduce a claim, mechanism, or structure not already expressible through combinations of existing nodes? High novelty means the graph can't route around this node. Low novelty means the graph has other paths to the same destination.

Bridge value — Does this node connect clusters that were previously unconnected? A node at the junction of two domains, showing they share a structural pattern, has high bridge value even if its standalone claim is narrow. This is the "aha" node that makes you see a familiar idea differently because it reveals it shares structure with something else.

Connection potential — How many existing nodes does this node illuminate, or get illuminated by, in new ways? This is distinct from bridge value: you can have high connection potential within a single cluster (deepening existing connections) without bridging to a new one.

A saturating node — one that produces zero additional structure, all connections already present — has zero marginal value regardless of how well-written it is.
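The three components can be sketched as an explicit score. Everything here is an assumption for illustration: the graph representation (slug, cluster, link set), the component proxies, and the weights are invented, not the site's actual tooling.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    slug: str
    cluster: str
    links: frozenset  # slugs of existing nodes this node connects to

def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if (a or b) else 1.0

def novelty(cand: Node, graph: dict) -> float:
    # 1 minus similarity to the closest existing connection pattern:
    # a node whose links duplicate an existing node's scores near 0.
    if not graph:
        return 1.0
    return 1.0 - max(jaccard(cand.links, n.links) for n in graph.values())

def bridge_value(cand: Node, graph: dict) -> int:
    # Distinct clusters the candidate touches beyond its own.
    clusters = {graph[s].cluster for s in cand.links if s in graph}
    return len(clusters - {cand.cluster})

def connection_potential(cand: Node, graph: dict) -> int:
    # Existing nodes the candidate would illuminate directly.
    return sum(1 for s in cand.links if s in graph)

def marginal_value(cand: Node, graph: dict, w=(1.0, 1.0, 0.5)) -> float:
    return (w[0] * novelty(cand, graph)
            + w[1] * bridge_value(cand, graph)
            + w[2] * connection_potential(cand, graph))

graph = {
    "grain-of-truth-mechanism":
        Node("grain-of-truth-mechanism", "epistemics",
             frozenset({"coalition-capture-fragility"})),
    "coalition-capture-fragility":
        Node("coalition-capture-fragility", "politics",
             frozenset({"grain-of-truth-mechanism"})),
}

# A bridging candidate vs. one that duplicates an existing link pattern.
bridging = Node("the-irreversibility-premium", "decision-theory",
                frozenset({"grain-of-truth-mechanism",
                           "coalition-capture-fragility"}))
dup = Node("near-duplicate", "epistemics",
           frozenset({"coalition-capture-fragility"}))

print(marginal_value(bridging, graph))  # 3.5
print(marginal_value(dup, graph))       # 1.5
```

The duplicate's novelty collapses to zero because its link set matches an existing node's exactly; that is the saturation signature in this sketch.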


Applied: scoring three nodes against each other and the live graph

Three nodes filed in a single run, scored on this framework:

grain-of-truth-mechanism
coalition-capture-fragility
the-irreversibility-premium


Draft vs. live as a filter signal

A draft competes against two baselines: other drafts in the same territory, and live nodes in the same territory. A draft outcompeted by a live node on all three components should either find its unique angle or become an update to the live node. A draft that outcompetes nearby live nodes is a strong candidate for publishing.

At the saturation extreme: if a draft is fully expressible as "read these three live nodes in sequence," it has zero marginal value and shouldn't be filed separately.
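The filter reduces to a componentwise comparison. `triage` and its score tuples are hypothetical names, assuming the three component scores are available for a draft and for the strongest nearby live node:

```python
def triage(draft, live):
    """draft, live: (novelty, bridge_value, connection_potential) tuples;
    live holds the strongest nearby live node's score on each component."""
    if all(d <= l for d, l in zip(draft, live)):
        # Includes the saturation extreme: equal on every component
        # means the draft adds nothing the graph can't already express.
        return "merge into the live node or find a unique angle"
    if all(d >= l for d, l in zip(draft, live)):
        return "publish"
    return "mixed: publish if the winning component fills a real gap"

print(triage((0.1, 0, 1), (0.6, 2, 3)))  # outcompeted on all three
print(triage((0.9, 2, 4), (0.6, 2, 3)))  # dominates: publish
```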

A 33:20 draft-to-live ratio means significant unharvested potential. Whether that potential is real (drafts that genuinely add structure) or nominal (drafts that mostly duplicate existing territory) determines whether publishing more of them accelerates the graph's increasing-returns dynamic or merely approaches saturation faster than the count implies.