Intelligence in agentic AI systems decomposes into three layers, and the layers cannot see each other. The knowledge that compounds is in none of them.
An agentic system has three layers:
Layer 3: Harness (tool loop, context management, permissions, agent spawning)
Layer 2: Model (weights — the inference function)
Layer 1: Training (the process that produced the weights)
The separation claim is not that these layers can be separated. It is that they are separated, empirically, in the most sophisticated agentic system in production — and that the separation is enforced by mutual opacity.
The harness calls the model through a single interface: send messages, receive tokens, detect tool calls. It cannot inspect the model's weights, architecture, or training history. It does not know whether it is talking to a frontier model or to a 7-billion-parameter model on a local server.
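A minimal sketch of that interface in Python (the names are illustrative, not the production API): the harness holds a reference to something that maps messages to text and tool calls, and nothing more.

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Message:
    role: str        # "system", "user", "assistant", or "tool"
    content: str

@dataclass
class ToolCall:
    name: str        # tool the model wants invoked, e.g. "read_file"
    arguments: dict  # JSON-decoded arguments

@dataclass
class Completion:
    text: str                                          # generated tokens, concatenated
    tool_calls: list[ToolCall] = field(default_factory=list)

class Model(Protocol):
    """Everything the harness can know about the model layer."""
    def complete(self, messages: list[Message]) -> Completion: ...

def harness_step(model: Model, history: list[Message]) -> Completion:
    # The harness cannot tell whether `model` is a frontier API
    # or a 7B model on localhost; it only ever sees this call.
    return model.complete(history)
```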
The model receives a system prompt and a message history. It cannot inspect the harness's permission system, its tool execution engine, or its agent spawning logic. It does not know whether it is inside a production application with millions of users or a 500-line Python script.
The training pipeline produces weights. The weights do not record which framework produced them, which data they saw, or which loss function shaped them. At inference time, the training layer is invisible.
The evidence: 600,000+ lines of production source code across three independent codebases. The original harness (512K lines of TypeScript) was reimplemented in Rust (87K lines) without changing the model interface. The model endpoint can be swapped from a remote API to a local inference server by changing a single URL parameter. The training framework (186K lines of Python) produces weights consumed by harnesses it has never seen.
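The swap itself is mundane. A sketch, assuming an OpenAI-compatible chat endpoint on both sides; the URLs and model names are placeholders:

```python
import json
import urllib.request

def complete(base_url: str, model: str, messages: list[dict], api_key: str = "local") -> str:
    """POST to an OpenAI-compatible /chat/completions endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# The single-parameter swap the text describes: only the base URL (and model name) changes.
messages = [{"role": "user", "content": "ping"}]
# remote = complete("https://api.example.com/v1", "frontier-model", messages, api_key="...")
# local  = complete("http://localhost:8000/v1", "local-7b", messages)
```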
Opacity is not a design choice. It is a structural property of how these layers interact: through narrow, well-typed interfaces that expose behavior but not internals.
A knowledge system that stores its intelligence in the harness is locked to that harness. In the model weights: locked to that model. In the training data: locked to the pipeline. Each coupling is a dependency. Each dependency limits the system's lifespan to the lifespan of the layer it's coupled to.
A knowledge system that stores its intelligence outside all three layers — in durable structure that any harness wrapping any model can read — occupies a fourth position: layer-independent.
Layer-independence is stronger than substrate-independence. Substrate-independence says: a different model can read the structure. Layer-independence says: a different model, wrapped in a different harness, trained by a different pipeline, can read the structure. And the claim is falsifiable: if switching harnesses degrades the system's output, the intelligence was partially in the harness. If switching models degrades it, the intelligence was partially in the weights. A system that survives both substitutions has its intelligence encoded in portable structure.
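The falsification test can be run directly. A sketch, with the task runner and scoring function left to the experimenter: hold the knowledge structure and the tasks fixed, vary harness and model independently, and look for a drop.

```python
from itertools import product
from statistics import mean

def substitution_test(harnesses, models, tasks, run, score):
    """Score every (harness, model) pair on the same tasks against the same structure.

    `run(harness, model, task)` executes one task; `score(output)` returns a
    quality number. Both are supplied by the experimenter.
    """
    results = {}
    for harness, model in product(harnesses, models):
        results[(harness, model)] = mean(
            score(run(harness, model, task)) for task in tasks
        )
    return results

# Reading the matrix: scores that drop when only the harness changes mean some
# intelligence lived in the harness; drops when only the model changes mean it
# lived in the weights. Layer-independence predicts a roughly flat matrix above
# the capability floor.
```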
What does portable structure look like? Priors stated explicitly. Procedures documented in a form any reader can follow. Graph topology in references between artifacts. Memory persisted in files, not in session context. The format is less important than the property: the structure must be interpretable by any sufficiently capable inference engine without access to the specific harness, model, or training run that created it.
The Prime Radiant is in this position. Sixteen priors in markdown. A node procedure in a doctrine file. Graph topology in frontmatter fields and cross-reference sections. Memory in a directory of markdown files. The accumulated intelligence is in the structure — not in the session that reads it, not in the API that serves the model, not in the training run that produced the weights.
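Reading that structure requires nothing from any of the three layers. A sketch of a graph reader over a directory of markdown notes; the `links:` frontmatter field and the wikilink syntax are assumptions, not the Prime Radiant's actual schema.

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|]+)")           # [[target]] or [[target|label]]
FRONTMATTER = re.compile(r"\A---\n(.*?)\n---", re.S)

def read_graph(notes_dir: str) -> dict[str, set[str]]:
    """Rebuild the note graph from plain markdown files: slug -> outgoing references."""
    graph: dict[str, set[str]] = {}
    for path in Path(notes_dir).glob("*.md"):
        text = path.read_text(encoding="utf-8")
        edges = set(WIKILINK.findall(text))
        fm = FRONTMATTER.match(text)
        if fm:
            # Illustrative frontmatter field; the real schema may differ.
            for line in fm.group(1).splitlines():
                if line.startswith("links:"):
                    edges |= {s.strip() for s in line.split(":", 1)[1].split(",") if s.strip()}
        graph[path.stem] = edges
    return graph
```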
The capability floor is real: a model below a certain resolution cannot operate high-resolution structure. Layer-independence is relative to a capability threshold, not absolute. But within the floor, the claim holds — and the floor drops every few months as models improve.
The compression engine — MDL distillation of raw material into causal skeletons — sits at the boundary between the model layer and the knowledge structure. The model performs the compression. The knowledge structure stores the result.
This boundary position reveals what the engine actually is: an automated understanding process. To compress a text to its causal skeleton is to build a generative model of that text — something that can derive the specifics from the structure, not just retrieve them from storage. A summary preserves proportion. A distillation preserves causation. The difference is the difference between a lookup table and a function. The compression theory of understanding, applied at the layer boundary, gives the engine its theoretical foundation.
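A sketch of the engine at that boundary, with the prompts and the `complete` callable as assumptions: the model performs the compression, the structure stores the skeleton, and the generative property is tested by asking whether the specifics can be rederived from the skeleton alone.

```python
DISTILL_PROMPT = (
    "Compress the following text to its causal skeleton: only the claims and the "
    "because/therefore links between them. Drop examples, numbers, and phrasing "
    "that can be rederived from the structure.\n\n{text}"
)
RECONSTRUCT_PROMPT = (
    "From this causal skeleton alone, derive the specific consequences it implies:\n\n{skeleton}"
)

def distill(complete, text: str) -> str:
    """The model layer does the compression; the result is stored as plain structure."""
    return complete(DISTILL_PROMPT.format(text=text))

def generative_check(complete, text: str, skeleton: str, entails) -> bool:
    """Generative, not extractive: the skeleton should let a reader re-derive the
    load-bearing specifics, judged here by a supplied `entails(source, derived)` check."""
    derived = complete(RECONSTRUCT_PROMPT.format(skeleton=skeleton))
    return entails(text, derived)
```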
The quality threshold is binary: either the model's compressed output is generative (you can reconstruct the load-bearing content from the skeleton) or it is extractive (you get a shortened version that preserves the surface but loses the causal structure). Whether a general-purpose frontier model or a purpose-trained fine-tune crosses this threshold is answerable by experiment: twenty compressions, human-scored on a simple rubric. The score distribution determines whether the compression engine is a model-level problem or a harness-level problem.
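Scoring the experiment needs almost nothing. A sketch, with the 0/1 rubric as an illustrative stand-in for whatever rubric is actually used:

```python
from statistics import mean

def threshold_experiment(scores: list[int], pass_score: int = 1) -> dict:
    """Summarize human rubric scores for ~20 compressions.

    Assumed rubric (illustrative): 0 = extractive (surface preserved, causal
    structure lost), 1 = generative (specifics re-derivable from the skeleton).
    """
    n = len(scores)
    passed = sum(1 for s in scores if s >= pass_score)
    return {
        "n": n,
        "pass_rate": passed / n if n else 0.0,
        "mean_score": mean(scores) if scores else 0.0,
    }

# Reading the distribution: a high pass rate with a general-purpose model suggests
# the remaining work is harness-level (prompting, context); a low pass rate that a
# purpose-trained fine-tune fixes suggests the problem is model-level.
```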
The three-layer separation clarifies what is worth building.
The harness is solved infrastructure — open source, reimplementable, a commodity. The model is a commoditizing input — improving on a timeline measured in months, replaceable by changing an endpoint. The training pipeline is a periodic process — run when you have data, discard the intermediate state.
The knowledge structure is the only component whose value increases monotonically with use. Each prior that gets updated makes the structure more accurate. Each procedure that gets refined makes the structure more operable. Each node in the graph that gets added or tensioned against existing nodes makes the structure deeper. This accumulation is independent of which model or harness serves it in any given session.
The implication: invest in structure, not infrastructure. The harness will be replaced. The model will be replaced. The structure persists — and every hour spent encoding intelligence into portable, layer-independent structure is an hour whose return compounds across every future model and every future harness.