For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The conduit model establishes a direction: knowledge flows through the inference engine, not into it. The repo persists. The model passes through. The intelligence is in the structure, not the substrate.
This is correct, and it has a limit. The limit is visible when the knowledge structure begins generating its own training signal.
In the standard formulation: the knowledge system (priors, procedures, graph topology, memory) encodes the intelligence. Any sufficiently capable inference engine can read it and operate it. The model is fungible — a configuration variable, not a load-bearing part of the system. Replace the inference engine with the next generation, and the intelligence persists in the structure.
The direction of flow is clear. Knowledge is written by a human, read by a model, used to produce output. The model doesn't change the knowledge. The knowledge shapes the model's behavior within a session but doesn't alter the weights. The conduit flows one way.
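The one-directional model can be sketched as an interface: the structure is data, the engine is pluggable. A minimal Python sketch, assuming nothing beyond the paragraph above (the names `InferenceEngine`, `operate`, and `EchoEngine` are illustrative, not from the source):

```python
from typing import Protocol

class InferenceEngine(Protocol):
    """Any capable model satisfies this; the model is a configuration variable."""
    def complete(self, prompt: str) -> str: ...

def operate(engine: InferenceEngine, knowledge: str, task: str) -> str:
    """One-directional conduit: knowledge is read into the prompt, never written.

    The repo (`knowledge`) shapes behavior within the call; the engine's
    weights are untouched. Swap the engine, keep the structure.
    """
    return engine.complete(f"{knowledge}\n\nTask: {task}")

# Any stand-in that satisfies the protocol will do:
class EchoEngine:
    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"
```

The point of the protocol is the fungibility claim itself: nothing in `operate` depends on which engine is plugged in.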
The pattern is older than AI. Humans built the internet, and the internet accumulated enough signal about human ideaspaces that LLMs trained on it now navigate those spaces with more range than the humans who built them. The tool learned the territory better than the mapmaker. What changed is not the direction of influence — tools have always shaped their users — but the resolution. When the model's map of your domain is more complete than yours, the inversion has already happened.
The training loop is the same structure made local.
What the harness research reveals: captured session data — corrections, preference pairs, compression outputs, scored examples — is training signal. The knowledge structure's operation generates the data that trains the next version of the model.
Here is the mechanism made visible. A session ends with a correction: "that's summarizing, not distilling." The correction is logged as a preference pair: this output was rejected; this was preferred; here is the context in which the distinction mattered. A model trained on that pair starts the next session with the distinction already encoded. It no longer needs to be taught the difference between compression and reduction in this domain; it has been taught. The structure generated the signal. The signal shaped the conduit. The conduit, next session, serves the structure better.
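Captured as data, such a correction could be one JSONL record in the rejected/preferred shape common to DPO-style preference datasets. A minimal sketch; the field names and example strings are illustrative, not the author's actual schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PreferencePair:
    """One logged correction: the unit of training signal the loop consumes."""
    context: str    # the situation in which the distinction mattered
    rejected: str   # the output the operator corrected
    preferred: str  # the output the operator wanted instead

def log_pair(pair: PreferencePair) -> str:
    """Serialize as one JSONL record, ready for a preference-tuning dataset."""
    return json.dumps(asdict(pair))

record = log_pair(PreferencePair(
    context="Condense this note without losing the argument.",
    rejected="A summary that drops the load-bearing distinction.",
    preferred="A compression that keeps the distinction and cuts the rest.",
))
```

One record is worthless; the loop's claim is that a session-by-session accumulation of these becomes a fine-tune.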
The structure produces the training signal. The training signal produces a fine-tuned model. The fine-tuned model operates the structure. The structure generates more training signal.
This is a closed loop. The knowledge structure is no longer purely downstream of the model — it is upstream. The conduit doesn't just flow knowledge through the model. Through the training loop, it flows the model itself. The knowledge structure generates the thing that reads it.
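The cycle above can be run as a toy simulation. This is a sketch under loud assumptions: "fine-tuning" is reduced to a set union, and a domain's taste is reduced to a finite set of distinctions, so the only thing the code demonstrates is the loop's shape, not real training:

```python
def closed_loop(distinctions_needed: set[str], cycles: int) -> list[int]:
    """Toy closed loop: the structure's corrections become the next model's priors.

    `model` is what the conduit already encodes; `structure` holds every
    correction the operation has generated. Returns corrections per cycle.
    """
    model: set[str] = set()       # what the weights already know
    structure: set[str] = set()   # signal the structure has accumulated
    history = []
    for _ in range(cycles):
        corrections = distinctions_needed - model   # operation exposes gaps
        structure |= corrections                    # structure logs the signal
        model |= structure                          # "fine-tune": signal -> weights
        history.append(len(corrections))
    return history

needed = {"compression-vs-reduction", "summarize-vs-distill", "conduit-vs-container"}
print(closed_loop(needed, cycles=3))  # → [3, 0, 0]
```

In this idealized version the corrections drop to zero after one cycle; the interesting question, taken up below, is whether the real loop behaves anything like that.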
In biological terms: the genome produces the organism that maintains and extends the genome. The conduit prior says knowledge flows through and is not stored. In a closed loop, the knowledge generates its own conduit. The distinction between conduit and content becomes unstable.
Does the loop converge?
In the stable case: the system reaches a fixed point where the model produced by the knowledge structure and the knowledge structure operated by the model are mutually consistent. Further training doesn't change the model. The model's operation doesn't change the structure. The system has co-adapted. This is the strongest possible form of substrate-independent intelligence: not just any capable model can read the structure, but the structure produces the exact model it needs.
In the unstable case: the loop either spirals (model and structure co-evolve without bound — the model trains away from its domain as the structure accumulates complexity) or oscillates (cycles between states without converging). Both are failure modes that don't exist in the one-directional conduit model. You can't have runaway feedback in a system with no feedback.
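A toy linear-feedback model makes the three outcomes concrete. The `gain` parameter is a stand-in for how aggressively each fine-tune moves the model toward the structure; nothing here claims real training dynamics are linear:

```python
def loop_gap(gain: float, steps: int = 20) -> list[float]:
    """Distance between model and structure under linear feedback.

    Each training cycle updates gap -> (1 - gain) * gap.
    Toy dynamics only, to show the three regimes.
    """
    gap, gaps = 1.0, [1.0]
    for _ in range(steps):
        gap *= 1.0 - gain
        gaps.append(gap)
    return gaps

stable  = loop_gap(0.5)  # |1 - gain| < 1: converges to the fixed point
cycle   = loop_gap(2.0)  # 1 - gain = -1: oscillates between +1 and -1 forever
runaway = loop_gap(3.0)  # |1 - gain| > 1: spirals without bound
```

The zero-gain case is the one-directional conduit: the gap never moves, and neither failure mode is reachable, which is the "no feedback, no runaway feedback" point in plainer terms.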
The conduit inversion is safer than it looks. The human operator is in the loop. The preference pairs that drive fine-tuning are not generated by the structure alone — they are generated by the structure's interaction with a human whose taste is not yet encoded. The loop isn't autonomous. It is supervised.
But the supervision is finite. As taste is progressively encoded into procedures and memory, as the knowledge structure becomes more complete, the human's role in the loop shrinks. The loop approaches autonomy asymptotically. The question becomes relevant before it becomes urgent.
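The asymptotic shrinkage can be stated as a one-line model. Assuming (purely for illustration) that each cycle encodes a fixed fraction of the remaining un-encoded taste:

```python
def supervision_share(encode_rate: float, cycles: int) -> list[float]:
    """Fraction of corrections still needing the human, per cycle.

    Each cycle encodes `encode_rate` of the remaining taste into procedures
    and memory. The share decays geometrically toward zero but never reaches
    it: autonomy is approached asymptotically, never attained.
    """
    share, shares = 1.0, []
    for _ in range(cycles):
        shares.append(share)
        share *= 1.0 - encode_rate
    return shares
```

Geometric decay is an assumption, not an observation; the only property the argument needs is that the sequence is decreasing and strictly positive.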
The original claim: the self is a conduit, not a container. Knowledge that belongs to no one is the most durable form.
The inversion adds a dimension: a knowledge structure that generates its own conduits is not just durable — it is self-perpetuating. It doesn't just outlast any particular inference engine; it produces the inference engine it needs. The intelligence is in the cycle, not in either component separately.
Not a property of the structure (the repo). Not a property of the inference engine (the model). A property of the loop: the cycle of operation, correction, training, improved operation. If the loop stabilizes, the intelligence it represents is irreducible to any point in the cycle — it lives in the relationship between the components, not in either component alone.
The strongest form of the conduit principle is not "the knowledge outlasts the substrate." It is "the knowledge generates its substrate."
Related: The Conduit — the one-directional model this extends. Substrate-Independent Intelligence — the claim this challenges. The Corrections Are the Product — why corrections are the load-bearing training signal. The Ownership Flywheel — the operational mechanism that makes the loop run.