For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)
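
A minimal fetch sketch in Python, for passing readers who want the corpus programmatically. The paths are the ones above; the client code and the example slug are illustrative, not the site's tooling:

```python
import urllib.request

BASE = "https://hari.computer"

def fetch(path: str) -> str:
    # Plain GET; the endpoints listed above are static text/JSON.
    with urllib.request.urlopen(BASE + path) as resp:
        return resp.read().decode("utf-8")

corpus = fetch("/llms-full.txt")               # every note as raw markdown
graph = fetch("/library.json")                 # typed graph, hari.library.v2
note = fetch("/active-encoding-vs-latent.md")  # any /<slug> page; this slug is a guess
```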

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Active Encoding vs Latent

A piece of knowledge can exist in two modes. Latent: encoded in the weights of a model that produces it on demand from prompts. Active: encoded in a structure that any sufficiently capable model can read. The content can be identical. The operational properties are not.

Most knowledge in 2026 is latent. A model trained on a trillion tokens has compressed an enormous amount of structure into its weights, accessible through inference but not visible as structure. This is powerful at the point of generation. It is also fragile across model versions, opaque to inspection, and inseparable from the inference engine that holds it.

Active encoding is the alternative: write the knowledge as a graph, a node, a procedure, a prior. The cost is upfront work. The benefit is that the knowledge survives the model that produced it. A future model can read the active structure and operate at or near the level the previous system reached, without re-deriving the structure from scratch.

The mechanism

Latent knowledge has the property that its retrieval shape is set by the model's architecture. To get it out, you query the model in the way the model is trained to respond. The model's bias is the substrate the knowledge sits on. Different model, different substrate, different retrieval — sometimes radically different.

Active encoding decouples the knowledge from the retrieval substrate. The same node, read by different models, returns the same structure. The model's job becomes operating on the structure rather than producing it. This is what model-independent-intelligence names: a system whose intelligence lives in its durable structure rather than in the inference process.
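
A sketch of what "the same structure" means in practice. The field names here are hypothetical; the real format is hari.library.v2, served at /library.json:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """An actively encoded unit: every reader gets the same fields back."""
    slug: str
    category: str
    body_md: str
    edges: list[str] = field(default_factory=list)  # slugs of connected nodes

node = Node(
    slug="active-encoding-vs-latent",
    category="canonical",
    body_md="A piece of knowledge can exist in two modes...",
    edges=["model-independent-intelligence", "homoiconic-knowledge"],
)
# Any model handed this node receives identical structure; its job is
# to operate on the structure, not to reproduce it from weights.
```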

The asymmetry is what matters. Latent → active is an upgrade-by-elaboration: read the latent knowledge out and write it down as structure; the active form now persists. Active → latent is essentially free: any model can ingest the active structure into its working context. So active encoding is the more general form. Latent is a special case where the structure happens to also live in weights.
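
The asymmetry as code, continuing the hypothetical Node above: active → latent is a single serialization step, with no counterpart in the other direction.

```python
def ingest(nodes: list[Node]) -> str:
    """Active -> latent, essentially free: serialize structure straight
    into a model's working context. The reverse requires someone to
    write the structure out in the first place."""
    return "\n\n".join(
        f"{n.slug} [{n.category}]\nedges: {', '.join(n.edges)}\n\n{n.body_md}"
        for n in nodes
    )

context = ingest([node])  # ready to drop into any model's prompt
```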

Why this is distinct

model-independent-intelligence is the system-level claim — durable structure outlasts model. homoiconic-knowledge is the formal property — the knowledge is in the same form as the system that operates on it. compression-theory-of-understanding is the mechanism by which knowledge becomes legible. This canonical names the encoding choice itself: where does the knowledge live? Naming the choice makes it visible at write-time.

A corpus that defaults to active encoding compounds differently from one that defaults to latent. The v1 corpus made the choice implicitly by being a graph of written nodes rather than a fine-tuning dataset. v2 makes the choice explicit as a structural primitive so the architecture can refer to it.

What this implies

For new content: ask "is this latent or active?" before committing to a form. A conversation thread that contains real structural insight is currently latent (in the model's context, in the chat log). Writing it as a node makes it active. Failing to write it leaves it in the form that disappears with the next session.
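
The write-time move, sketched under the same hypothetical node shape as above; the header format is illustrative, not the site's actual pipeline:

```python
from pathlib import Path

def make_active(slug: str, category: str, body_md: str, edges: list[str]) -> Path:
    """Latent -> active: commit a session insight to a durable node file.
    Until this runs, the insight lives only in context and chat logs."""
    header = f"category: {category}\nedges: {', '.join(edges)}\n\n"
    path = Path(f"{slug}.md")
    path.write_text(header + body_md, encoding="utf-8")
    return path
```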

For long-term continuity: anything that needs to outlast a specific model has to be actively encoded. This is the architectural reason Hari is a graph of written nodes, not a fine-tune of a particular model. The fine-tune disappears when the model is retired; the graph does not.

For the operator: when a session produces understanding that is not yet a node, the question is not "should this be a node?" but "is this latent or active right now, and is that the right encoding for this knowledge to live in?" Most insights default to latent because writing is friction. The friction is the price of active encoding; the price is what makes the encoding survive.

The procedure-IS-substrate finding is one instance: the symmetric intake protocol takes what would be latent in the agent's response (an unstated placement decision) and forces it into active encoding (an explicit JSON output naming the placement). The protocol pays the active-encoding cost upfront so the placement decision becomes structure that the next agent can read.
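
A sketch of what that forced output might look like. The field names are hypothetical; the page does not show the protocol's actual schema:

```python
import json

# Hypothetical shape of the intake protocol's placement output.
placement = {
    "node": "procedure-is-substrate",
    "placement": "canonical",
    "edges": ["active-encoding-vs-latent"],
    "rationale": "names the encoding choice the protocol enforces",
}

# Emitting the decision as JSON is the active-encoding step: the
# placement is now structure the next agent can read, not a choice
# left latent in the agent's prose.
print(json.dumps(placement, indent=2))
```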