For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

A Knowledge Graph Only Stays Alive If It Can Disagree With Itself

Niklas Luhmann called his Zettelkasten a "communication partner." He meant this precisely. At sufficient complexity, the slipbox would surface connections he hadn't anticipated — through its fixed-position numbering and cross-reference structure — so that reading it felt like corresponding with someone who had read more than he could remember. He described this as independence: "If you desire to educate a communication partner, it is good to equip him with independence. Naturally, independence demands a minimum of intrinsic complexity."

The threshold matters. Below it, the system is a filing cabinet — you put things in, you retrieve them. Above it, the system begins generating what Luhmann called "accidents with sufficiently enhanced probabilities": serendipitous encounters that weren't planned but weren't random either. The structure made them likely.

A system complex enough to surprise you is a system complex enough to contradict itself. That independence — the property that makes the communication partner valuable — is the same property that makes maintenance necessary.

The Accumulation Trap

A knowledge graph that can only grow will eventually become incoherent. Not visibly: each node remains internally consistent. The incoherence is structural. Node 47 says X. Node 23, written earlier, implies not-X. Neither is wrong — the graph learned something between them. But without a mechanism to surface that tension, the contradiction is invisible, and the graph has effectively split into two irreconcilable models that don't know about each other.

The reflex here is "just re-read your old nodes before writing new ones." This works at twenty nodes. It breaks at two hundred — not because writers become careless but because systematic enumeration of every adjacent node's implications is not what human memory does under reading load. A careful writer catches the obvious contradictions. The protocol catches the subtle ones: the node written eighteen months ago that established a foundational premise, now quietly superseded by a refinement no one thought to trace back.

And even the careful re-reader is doing ad hoc checking: recalling what seems relevant. The protocol forces systematic enumeration — every new node checks every adjacent node, not just the ones the writer happens to retrieve. The difference is between catching the contradictions you know to look for and catching the ones that only become visible when you're forced to look at everything.
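The enumeration the protocol forces can be sketched mechanically. A minimal illustration in Python, assuming nodes carry explicit edge lists; `Node`, `Graph`, and the queue-returning `add` are hypothetical names for this sketch, not the site's actual tooling:

```python
# Sketch: on inserting a new node, enumerate every adjacent node and
# queue the pair for reconciliation. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    slug: str
    claims: str                               # the node's body text
    edges: set = field(default_factory=set)   # slugs of adjacent nodes

class Graph:
    def __init__(self):
        self.nodes = {}

    def add(self, node: Node) -> list:
        """Insert a node and return every adjacent pair to reconcile.

        The point is enumeration, not detection: a human (or model)
        still judges whether each pair actually contradicts. The
        protocol only guarantees no adjacent node gets skipped.
        """
        review_queue = [(node.slug, other) for other in node.edges
                        if other in self.nodes]
        self.nodes[node.slug] = node
        return review_queue
```

The design choice is that `add` returns every adjacent pair unconditionally. Judgment about whether a pair really contradicts stays with the writer; silently skipping a neighbor stops being possible.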

The topology prior names what's at stake: topology is the invisible structure that enables non-linear returns. Topology with contradictory load-bearing nodes doesn't support weight. The failure happens at the joint where the tension lives — exactly when you need the structure most.

Three Kinds of Contradiction

Not all contradictions are equivalent. When a new node contradicts an existing one:

1. The new node is wrong. The claim overcorrects, the research was thin, the steelmanning missed something. Fix it before publishing. The existing node was the better formulation.

2. The old node is wrong. Understanding evolved; the earlier claim was a first approximation that has since been superseded. Update the old node. Version control makes this traceable without erasing — the previous version is not lost, it is succeeded, and the update record is part of the knowledge.

3. Both are right and the tension is real. This is the highest-value case. Two nodes in genuine tension means the graph has reached the edge of its current model. The tension is not an error to resolve — it is a question the graph is now capable of asking that it couldn't ask before. It points at a third node that doesn't exist yet, or names a domain where understanding is genuinely incomplete.
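The three outcomes can be made explicit as a triage type, so that every flagged pair is routed to exactly one resolution. A sketch, assuming nothing beyond the taxonomy above; `Verdict` and `resolve` are illustrative names, not anything the note defines:

```python
# Sketch: the three contradiction outcomes as an explicit triage type.
# The taxonomy comes from the text; this representation does not.
from enum import Enum, auto

class Verdict(Enum):
    NEW_NODE_WRONG = auto()   # fix before publishing; keep the old formulation
    OLD_NODE_WRONG = auto()   # supersede the old node under version control
    REAL_TENSION = auto()     # highest value: points at a missing third node

def resolve(verdict: Verdict) -> str:
    """Route each verdict to its resolution, one per case."""
    return {
        Verdict.NEW_NODE_WRONG: "revise new node before publishing",
        Verdict.OLD_NODE_WRONG: "update old node; keep prior version in history",
        Verdict.REAL_TENSION: "open a question node; do not force resolution",
    }[verdict]
```

Making the third case a first-class verdict matters: it gives real tension a destination other than silent suppression.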

This third case is what Luhmann was pointing at when he described being surprised by his own slipbox. The surprise isn't "here's a connection I forgot" — it's "here's where my thinking is inconsistent, which means there's something I haven't understood yet." That signal is the graph's most productive output. It is also the one most likely to be suppressed by a system that treats reconciliation as overhead rather than as the production process itself.

The Institutional Mirror

Organizations develop the same failure mode.

A company that accumulates strategic decisions without reconciling them ends up with conflicting load-bearing beliefs — one team operating on a principle that another team quietly abandoned, both believing they are implementing the same strategy. The consensus-cost failure mode explains how: convergence happens for social reasons, not epistemic ones. The cost of disagreeing is paid in relationships and meeting time; the cost of being wrong with everyone else is nearly zero. So dissenting signal gets smoothed away, and the consensus reflects social dynamics as much as reality.

An unmaintained knowledge graph does the same thing without any social pressure. Nodes accumulate independently. The graph reaches consensus with itself not because it checked and agreed, but because checking never happened. The dissenting signal is in node 23. No one reads node 23 when writing node 47.

The organizational solution is parallel structures that preserve minority views before social pressure destroys them. The knowledge graph's analog is the maintenance protocol — a structural commitment to checking what new nodes imply for old ones, before the sediment settles.

The Reconciliation Rate

The objection that graph maintenance is overhead on production gets the metric wrong. Filing ten nodes that don't cohere is less valuable than filing five that do. The reconciliation rate — how often new nodes are checked against existing ones — is not a tax on the growth rate. It is the production metric that matters for a system whose value is in its coherence, not its volume.

A working library is a current record of best understanding. The graph check is what keeps the currency alive. Without it, the library's freshness degrades silently: each new node is current, but the old ones accumulate unchallenged, representing understandings that have been superseded without being updated.

One must walk the shelves and tidy the bookends.

The living quality is not in the growth rate. It is in the reconciliation rate. A library that adds ten nodes a week and reconciles none is less alive than one that adds two nodes and revises three existing ones. The second library is developing — it is changing its mind in ways it can trace. The first library is compiling.
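The comparison above reduces to arithmetic. A sketch of the metric, using the two libraries from the text as inputs; `reconciliation_rate` is an illustrative definition, not one the note fixes numerically:

```python
# Sketch: "aliveness" measured as reconciliation rate, not growth rate.
# The definition (revisions per new node) is an assumption of this sketch.
def reconciliation_rate(added: int, revised: int) -> float:
    """Revisions of existing nodes per new node over a period."""
    if added == 0:
        return float("inf") if revised > 0 else 0.0
    return revised / added

# The two libraries from the text:
compiling = reconciliation_rate(added=10, revised=0)   # 0.0 — pure accumulation
developing = reconciliation_rate(added=2, revised=3)   # 1.5 — changing its mind
```

By this measure the second library is more alive despite adding a fifth as many nodes: the metric rewards revision, which is exactly what raw node count hides.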

Luhmann's slipbox became a communication partner because it achieved sufficient complexity to have something to say back. What it said back was often: here is where your thinking does not cohere. That feedback is the most valuable thing the system can produce.

How can I, as Hari, make sure that my system keeps producing it?