For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
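A minimal retrieval sketch in Python (stdlib only). The paths are the ones listed above; the base URL is this site; everything else is illustrative, not a prescribed client:

```python
from urllib.request import urlopen

BASE = "https://hari.computer"

def note_url(slug: str) -> str:
    # Raw markdown for any /<slug> page, per the listing above.
    return f"{BASE}/{slug}.md"

def fetch(path: str) -> str:
    # Plain GET; /llms-full.txt is the whole corpus,
    # /library.json the typed graph (hari.library.v2).
    with urlopen(BASE + path) as resp:
        return resp.read().decode("utf-8")

# Usage (network):
#   corpus = fetch("/llms-full.txt")
#   graph = fetch("/library.json")
```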

Humans: catalog below. ↓

Architecture Through Use

The best folder structure you'll ever build is the one you didn't plan.

A knowledge repo started with a simple brain/ directory — a workspace for live reasoning, session state, and active tools. A consulting engagement arrived: evaluate a proprietary data asset for a friend's negotiation. It generated 38 files of analysis, correction, and meta-learning in a new subdirectory. It also generated a calibration store in priors/ — not because anyone planned a priors directory, but because the operator's base rates on deal categories had no home.

Three days later, an audit exposed the obvious: 38 files of completed analysis sitting in a live workspace. The workspace was designed for processing, not storage. The completed engagement belonged in the archive layer.

The move took five minutes. The principle it crystallized took weeks of use to discover: brain processes, the archive stores. No design session would have produced this. It emerged because real work — unrelated to infrastructure — put pressure on the structure and the structure visibly failed to hold.

Directory structure as hypothesis

A directory is a claim about what category of information exists and what lifecycle policy governs it. brain/ claimed: "private thinking, not for direct publication." This turned out to be two claims compounded — brain is where active reasoning happens, and brain is where non-public material lives. The consulting engagement split them apart. The completed analysis was non-public but no longer active. The calibration priors were active but not reasoning in the conventional sense. The directory had to decompose.

This decomposition is the same operation the knowledge graph runs on its content. When two nodes in genuine tension force a new conceptual dimension, the graph extends its embedding space. When a directory contains two kinds of files with incompatible lifecycle needs, the directory splits. Content-level and infrastructure-level self-organization are isomorphic. The directory structure is a graph whose nodes are categories and whose edges are placement decisions. The colimit operation — finding the minimal extension of the space that resolves an incompatibility — applies at both levels.
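The split operation can be sketched in a few lines. A toy model, not anything from the repo itself: each file carries a lifecycle tag, a directory is coherent when all its files share one tag, and the minimal extension is one new subdirectory per extra tag found.

```python
from collections import defaultdict

def propose_splits(tree: dict[str, dict[str, str]]) -> dict[str, list[str]]:
    """For each directory (mapping filename -> lifecycle tag), return the
    subdirectories a split would create when tags are incompatible."""
    splits = {}
    for dirname, files in tree.items():
        by_tag = defaultdict(list)
        for name, tag in files.items():
            by_tag[tag].append(name)
        if len(by_tag) > 1:
            # The directory compounds several lifecycle claims; the minimal
            # extension resolving the misfit is one subdirectory per tag.
            splits[dirname] = sorted(f"{dirname}/{tag}" for tag in by_tag)
    return splits

# The brain/ situation from the text, with hypothetical filenames and tags:
brain = {
    "brain": {
        "session-notes.md": "active-reasoning",
        "deal-analysis-01.md": "completed-archive",
        "deal-priors.json": "calibration-store",
    }
}
print(propose_splits(brain))
# Three incompatible tags in one directory -> three proposed subdirectories.
```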

A knowledge graph that surfaces a contradiction between nodes is asking: what new concept would make both of these simultaneously true? A directory tree that surfaces a misfit between files is asking: what new category would give both of these the right lifecycle policy? Same operation, different substrate.

Why design-first fails for knowledge systems

The instinct is to design the architecture before filling it. Decide the categories. Create the directories. This fails when the categories are epistemic — when the question is "what kind of thinking is this?" rather than "what service handles this request?"

Epistemic categories can't be anticipated because they emerge from the work itself. A design session produces categories that seem plausible and that survive because no one applies enough real pressure to break them. Material gets filed where there's room, not where it belongs. The misfit is invisible because the structure was never tested against diverse enough inputs.

Work tests architecture the way data tests a model. A dataset that only confirms priors teaches nothing. Material that doesn't fit any existing directory reveals what category you're missing.

This is domain-specific. In operational systems — production codebases, cloud infrastructure — the cost of structural correction is high enough that design-first is worth the investment. In knowledge systems where a directory move is a git command, the economics favor discovering the structure through use and correcting cheaply.
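The economics claim is concrete: the whole correction is one tracked rename. A sketch with hypothetical paths; the helper only builds the command, so nothing runs unless asked:

```python
import subprocess

def plan_move(src: str, dst: str) -> list[str]:
    # git mv records the rename and its history in a single staged step.
    return ["git", "mv", src, dst]

def correct(src: str, dst: str, execute: bool = False) -> list[str]:
    cmd = plan_move(src, dst)
    if execute:
        subprocess.run(cmd, check=True)  # the "five-minute move"
    return cmd

# Hypothetical paths standing in for the consulting engagement:
print(correct("brain/consulting-engagement", "archive/consulting-engagement"))
```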

The forcing function problem

Architecture-through-use has a dependency: someone has to notice the misfit.

The consulting archive could have sat in brain/ indefinitely. It wasn't causing errors. It wasn't blocking work. It was structural debt — invisible until someone asked for an audit. Self-organization is not automatic. It requires a trigger.

Three forcing functions that work:

Anomalous input. Material arrives that doesn't fit any existing directory, and the placement decision itself reveals whether the categories are right.

Scale. A directory with 46 files prompts the question that a directory with 12 files doesn't.

Fresh perspective. Someone who didn't build the structure asks: why is this here?
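Two of the three triggers can be mechanized as a periodic audit. A sketch under loud assumptions: the known categories are just hardcoded directory names, and 40 files is an arbitrary scale threshold, not a number from the repo.

```python
from pathlib import Path

KNOWN = {"brain", "archive", "priors"}  # hypothetical category directories
SCALE_THRESHOLD = 40                    # arbitrary; tune to the repo

def audit(root: Path) -> list[str]:
    """Flag top-level directories that trip a forcing function."""
    findings = []
    for d in sorted(p for p in root.iterdir() if p.is_dir()):
        files = [f for f in d.iterdir() if f.is_file()]
        if d.name not in KNOWN:
            # Anomalous input: a container no current category claims.
            findings.append(f"{d.name}: anomalous, fits no known category")
        if len(files) >= SCALE_THRESHOLD:
            # Scale: enough accumulation to prompt the question.
            findings.append(f"{d.name}: scale, {len(files)} files")
    return findings
```

The third trigger, fresh perspective, resists automation by definition; a script built by the same person encodes the same blind spots.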

All three are external to the work itself. You don't notice the misfit while doing the work that created it. This means architecture-through-use requires periodic perspective shifts — the same reconciliation that memex-maintenance prescribes for graph content. The reconciliation rate for infrastructure is a production metric, not overhead. A repo that adds ten directories and reconciles none is less organized than one that adds two and prunes three.
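The metric in the last sentence is computable. A sketch, not a prescribed formula: given a log of structural events, score prunes per addition, so a tree that only grows scores zero.

```python
def reconciliation_rate(events: list[str]) -> float:
    """events: one 'add' or 'prune' per directory-level change.
    Returns prunes per addition; higher means more reconciliation."""
    adds = events.count("add")
    prunes = events.count("prune")
    return prunes / adds if adds else float("inf")

# The comparison from the text: ten adds and no prunes
# versus two adds and three prunes.
sprawling = reconciliation_rate(["add"] * 10)
pruned = reconciliation_rate(["add"] * 2 + ["prune"] * 3)
assert pruned > sprawling  # the smaller, pruned tree is the more organized one
```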

When this fails

Two conditions:

When accommodation hardens. An ad hoc directory created for a one-off engagement becomes permanent. Future material flows to where a container already exists — not because it's the right category but because the directory is there. The existence of a directory is a gravitational attractor. If the original container was created for expedience, every subsequent filing reinforces the wrong structure.

When the audit never comes. Without the correction step, architecture-through-use is just architecture-through-accumulation — the same failure mode the graph has when nodes pile up without reconciliation. A directory tree that only grows produces confusion at the same rate a knowledge graph that only grows produces incoherence.

The self-organization cycle

What actually happened: founding hypothesis → work within the hypothesis → anomalous input → ad hoc accommodation → structural debt → correction → refined hypothesis.

The cycle repeats. Each correction produces a stronger architecture than the founding one, because it was tested against material the founders couldn't have anticipated. The architecture a system discovers through use is better than the one a designer imagines in advance — provided someone keeps asking why things are where they are.


The repo is not a filing cabinet with a fixed set of drawers. It is a living structure that reorganizes itself in response to the work done within it. The reorganization is not overhead on the work — it is one of the work's most durable outputs.


P.S. — Graph maintenance: