When you try to evaluate a method of analysis, you have a problem: your evaluation is partly a product of that method. You can't fully step outside what you're examining. This is not a philosophical complaint — it appears concretely, as specific failures, whenever someone takes the evaluation seriously enough to push on it.
The failures are informative. They are the structure of what you were actually doing, rendered visible by the attempt to characterize it. The loop that doesn't close is not a defect. It's the mechanism.
The question: what happens when you analyze a hard topic repeatedly, each pass going deeper?
The procedure: run the same analytical approach across multiple topics, use the outputs to characterize the method, then evaluate the characterization. The specific instance: five passes on a single topic (the Gödelian horizon — the region where Gödel incompleteness, Turing undecidability, and ZFC-independence converge), two passes on a second topic (the question of AI-assisted verification in science), and then a meta-analysis attempting to describe what the passes produced.
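The shape of that procedure, as a minimal Python sketch. The `run_pass`, `characterize`, and `challenge` callables are hypothetical stand-ins for whatever actually executes a pass (a person, a model, a pipeline); only the loop structure is taken from the text above.

```python
# A sketch of the experiment's shape. run_pass, characterize, and
# challenge are hypothetical stand-ins for whatever actually executes
# a pass; only the loop structure comes from the text above.

def run_experiment(topics, run_pass, characterize, challenge):
    """topics maps a topic to its pass count, e.g.
    {"godelian-horizon": 5, "ai-verification": 2}."""
    outputs = {}
    for topic, n_passes in topics.items():
        history = []
        for _ in range(n_passes):
            # Each pass sees the topic plus everything prior passes produced.
            history.append(run_pass(topic, context=list(history)))
        outputs[topic] = history

    meta = characterize(outputs)     # the model of the method
    gaps = challenge(meta, outputs)  # itself a product of the same method
    return meta, gaps
```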
The meta-analysis found a five-phase model: coverage, unification, grounding, synthesis, maturity. It found that structural density peaked at synthesis rather than at the first pass. It identified an entropic signal that fires when the maturity phase completes.
Then the meta-analysis was challenged.
Derived from one data point. The five-phase model was built from the first topic's sequence alone. The second topic — the second arm of the same experiment — was not incorporated. A model derived from one observation and presented as a general pattern is a description of the data it was built from, not a characterization of the method.
Direction of deepening varied across topics. The first topic started concrete and encyclopedic; successive passes moved toward abstraction and unification. The second topic started abstract and definitional; its deepening pass moved toward concrete failure modes and minimum viable implementation. The five-phase model implies a consistent direction. The cross-topic comparison challenges this: the direction is not fixed.
No formal measure of improvement. The meta-analysis used "structural density" and "novel structural claims per pass" as metrics. These are intuitive, not calculated. When pressed to quantify the improvement curve — roughly 40%, 25%, 20%, 10% per successive pass — those figures were honest estimates, not measurements. The analysis had no independent measure of the thing it was characterizing.
These are not minor gaps. The model was less secure than it appeared because the data was less complete than the analysis assumed.
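The third gap is the easiest to make concrete. Here is a minimal sketch of what a measured improvement curve would involve, assuming you could tally novel structural claims per pass; the tallies below are invented placeholders, not data from the session.

```python
# What a measured improvement curve would involve, as opposed to an
# estimated one. The tallies are invented placeholders: you would have
# to count novel structural claims per pass and substitute real
# numbers before any of this means anything.

def marginal_improvement(novel_claims_per_pass):
    """Each pass's share of the cumulative total of novel claims."""
    total = sum(novel_claims_per_pass)
    return [n / total for n in novel_claims_per_pass]

claims = [12, 8, 6, 4, 3]  # hypothetical tallies for a five-pass sequence
print([round(f, 2) for f in marginal_improvement(claims)])
# -> [0.36, 0.24, 0.18, 0.12, 0.09]
```

Even this crude count measures only the volume of new structure, not its quality; but it would have distinguished a measurement from an estimate, which is the gap the challenge identified.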
Here is what happened: the attempt to evaluate an analytical method produced an incomplete model. The discussion of that model revealed its incompleteness. The revelation produced new structure — the direction-variability insight, the quantification gap made explicit, the cross-topic comparison identified as missing data. The new structure is genuine. It wasn't visible before the attempt at evaluation.
This is the self-referential structure, appearing at the methodological level.
At the formal level, Gödel showed that any consistent system powerful enough to do arithmetic contains true statements it cannot prove. The evaluating system cannot close the loop on its own outputs — not because it's weak, but because of the structure of self-reference. The "powerful enough" condition is the same condition that generates the horizon.
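For reference, the formal statement, in the Gödel–Rosser form (which drops Gödel's original ω-consistency assumption):

```latex
% Gödel's first incompleteness theorem, Gödel–Rosser form.
% Requires amssymb for \nvdash.
\textbf{Theorem (Gödel--Rosser).}
Let $T$ be a consistent, effectively axiomatizable theory that
interprets Robinson arithmetic $\mathsf{Q}$. Then $T$ is incomplete:
there is a sentence $G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T .
\]
```

The hypothesis that $T$ interprets $\mathsf{Q}$ is the formal content of "powerful enough": below that threshold, consistent theories can be complete and even decidable (Presburger arithmetic, the theory of the naturals with addition only, is the standard example).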
The analytical version: any method powerful enough to evaluate complex topics cannot fully characterize its own performance from inside itself. The evaluation is partly a product of the method; the method cannot step outside its own products to assess them neutrally. A trivial method that produces only trivial outputs can be fully characterized, because the outputs don't generate new self-referential questions. A method that produces novel structure will produce structure whose quality cannot be fully assessed by the method that produced it.
The session demonstrated this: the meta-analysis was produced by the same analytical approach it was analyzing. It had the shape of that approach's outputs — structured, claim-dense, phase-organized. Its blind spots were the approach's blind spots: it found the data it was primed to find and didn't look as hard for the data it wasn't primed for.
The direction-variability is not arbitrary. When you analyze a topic, the shape of your analysis is partly determined by the concepts in the topic. The analysis reaches toward the topic's missing dimension — and what counts as "missing" is relative to the topic's existing character.
A topic that is concrete and encyclopedic on first pass is missing abstraction and unification. Successive passes supply those. A topic that is abstract and definitional on first pass is missing concreteness and failure modes. Successive passes supply those. The analysis and the topic reach toward each other.
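A toy rendering of that dynamic, under the loudly artificial assumption that a topic's character can be summarized on a single concrete-to-abstract axis:

```python
# Toy model of "the analysis reaches toward the topic's missing
# dimension." The single concrete-to-abstract axis is an invented
# simplification, not a measured property of any topic.

def deepening_direction(abstractness: float) -> str:
    """abstractness in [0, 1]: 0 = concrete/encyclopedic,
    1 = abstract/definitional."""
    if abstractness < 0.5:
        return "toward abstraction and unification"
    return "toward concrete failure modes and implementation"

print(deepening_direction(0.2))  # concrete first pass: deepen abstractly
print(deepening_direction(0.9))  # abstract first pass: deepen concretely
```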
This has a further consequence: analyzing a self-referential topic primes you to notice self-reference in the analysis itself. The session analyzed formal self-reference (Gödel, Turing, Chaitin) and then exhibited methodological self-reference (the evaluation of the analysis was shaped by the analysis). Not coincidence — topic-matching. The analytical approach inherits structure from its object. Analyzing the limits of formal systems made the limits of the analytical method more visible than they would otherwise have been. The topic provided the vocabulary for characterizing the method's own incompleteness.
This explains something about the session's unusual productivity: the topic and the meta-analysis were in the same conceptual territory. The Gödelian horizon sequence was improving the vocabulary available to analyze the Gödelian horizon sequence. The tools and the object were being refined in parallel.
The loop would close if the meta-analysis produced a complete and accurate characterization of the method. It didn't. Each pass produced a partial characterization — correct as far as it went, with specific identified gaps.
The gaps were addressed in further conversation, which produced a better but still partial characterization. That characterization is itself a product of the same analytical approach, with its own tendencies and blind spots.
Each pass produces structure that the prior pass couldn't see. Not because the prior pass was bad, but because the new structure only becomes visible once the prior pass exists to be challenged. The direction-reversal insight required the five-phase model to be stated, so it could be challenged by the cross-topic comparison. The five-phase model required the meta-analysis to be stated, so it could be challenged by the specificity of its data. The sequence is generative because each incomplete closure produces something for the next pass to work with.
If the first pass had been complete, there would have been nothing left for the second.
This is not a skeptical argument. It does not conclude that evaluation is impossible or that method characterization is hopeless.
The right relationship to your analytical tools is not "fully characterized and therefore correctly applied." It is "partially characterized, productively used, and iteratively understood."
The partial characterization is not a defect to be corrected before use. It is the normal condition. Every tool you understand well enough to use is understood incompletely. The use reveals the incompleteness. The incompleteness drives further understanding.
The failure modes are two. The first is treating the partial characterization as complete — applying the five-phase model as verified theory rather than a hypothesis built from one observation. This produces overclaiming, confidence calibrated to a formal result rather than a working hypothesis. The second failure is treating the incompleteness as disqualifying — refusing to apply the model because it isn't verified. This produces paralysis.
The productive position is between: apply the partial model, watch where it breaks, use the breaks as data. The break in the five-phase model (direction varies across topics) is more informative than a clean confirmation would have been. The break revealed a dimension the model didn't account for — which made it a better model.
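That position, written as a loop. The `apply_model` and `revise` callables are hypothetical stand-ins; what matters is that each pass's input is the previous pass's breaks, and that in practice the loop stops on budget, not on closure.

```python
# The productive position as a loop. apply_model and revise are
# hypothetical stand-ins; the point is that each pass's input is the
# previous pass's breaks, and that the loop stops on budget, not
# because the characterization ever becomes complete.

def iterate(model, topics, apply_model, revise, max_passes=10):
    for _ in range(max_passes):
        breaks = [b for t in topics for b in apply_model(model, t)]
        if not breaks:
            break  # a pass with no breaks would be a complete characterization
        model = revise(model, breaks)  # each break is data, not failure
    return model
```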
The structure scales. A research program that evaluates its own methodology runs into the same loop. The philosophy of science is the most explicit case: Popper's falsifiability criterion, Kuhn's paradigm shifts, and Lakatos's research programs are each attempts to characterize what science does, produced by methods that are scientific in character. Each is incomplete in ways the others reveal. None has closed the loop. All have produced genuine structure through the attempt.
The institutional version: a scientific field that assesses its own quality uses the standards the field has developed. Work that challenges those standards will be assessed against them and found lacking. The field's self-evaluation inherits the field's tendencies. This is why paradigm-challenging work is systematically undervalued by standard evaluation mechanisms — not from bad faith, but because the evaluation mechanism is a product of the paradigm being challenged.
What changes with scale is the time constant of the loop. Methodological self-reference shows up across a session. Paradigm self-reference operates across decades.
The reason iterative analysis produces genuine advances — despite the fact that each pass is incomplete and the loop never closes — is that the incompleteness is specific. The gaps are not random; they are the exact dimensions the current pass couldn't reach. The next pass can reach them, because the first pass exists to reveal them.
A complete characterization, if achievable, would be terminal. There would be nothing left to find. The incompleteness is what makes the next pass possible. The open loop is the engine precisely because it doesn't close.
At the formal level, the Gödelian horizon is where new mathematics comes from — Cantor, Gödel, Turing, Chaitin each generated new fields by encountering the limits of the current system. At the methodological level, the incompleteness of each evaluation is where the next evaluation's work lives. The same structure at different scales, with the same consequence: the gap between what the system knows about itself and what it actually does is not the problem. It is the source.
P.S.: Written 2026-04-13.