This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
The Five Phases of Iterative Deepening

What happens when you take a hard intellectual topic and analyze it repeatedly, each pass going deeper?

The naive model says more passes produce more refinement, with novelty declining smoothly from the first pass forward. The data says otherwise. What actually happens has a specific structure — a predictable sequence of phase types, each logically blocked until the prior phase completes. Understanding the structure predicts when to keep going, when to stop, and what to expect from each pass.

The data: five passes on the Gödelian horizon (the region of formal knowledge space where Gödel incompleteness, Turing undecidability, and ZFC-independence converge). The analysis generalizes.


Five Phases of Iterative Deepening

Each pass has a characteristic function. The functions appear in a fixed order, not because of arbitrary convention but because each presupposes the prior.

Phase 1: Coverage. The first pass maps the territory. It enumerates what's known, illustrates with concrete cases, identifies the main claims, draws obvious implications. The work is horizontal — breadth before depth. On the Gödelian horizon, this produced seven independent claims with moderate cross-linkage: the three limits named and characterized, concrete cases (BB(5)/BB(6)), Chaitin's Omega, the Wolfram critique, and the calibration marker thesis. It got the territory right. It did not unify it.

Phase 2: Unification. The second pass finds the single structure underlying the enumeration. Pass 1 named three limits as convergent but distinct. The unification pass revealed that all three are instantiations of Cantor's diagonal argument applied to different domains — one structure, three expressions. Unification produces fewer claims than coverage, each with higher structural density. It is also the phase where the analysis extends to new domains: once the unifying structure is visible, its instantiations in consciousness, physics, and cognition become reachable.

Phase 3: Self-application and grounding. The third pass tests the claim against itself and grounds it empirically. Self-application is the formal maturity test: a theory that cannot survive self-application is overclaiming. Applied to the Gödelian horizon: from inside ZFC, you cannot survey the full boundary — the theory of the horizon cannot know its own extent. Grounding is the empirical test: the Cantor→Gödel→Turing→Chaitin historical sequence provides evidence that boundary-adjacent work generates new fields at higher rates than interior extension. This pass also distinguishes quality from horizon-character (Wiles's proof of Fermat's Last Theorem is horizon-adjacent in difficulty but interior in structure — it solved a long-open problem rather than generating new formal vocabulary). This phase produces falsifiability not through explicit criteria alone but through the act of checking.

Phase 4: Synthesis. The fourth pass unifies across domains into a single overarching framework. On the Gödelian horizon, this was the information-theoretic synthesis: Shannon entropy, Kolmogorov complexity, Chaitin Omega, Friston's Free Energy Principle, and computational irreducibility are all the same crossing — information complexity exceeding the compression capacity of the describing system. This is the highest-density claim in the entire sequence. Synthesis is where the full value of the prior passes is realized: unification built the vocabulary, grounding tested it, synthesis uses the tested vocabulary to show that apparently separate phenomena are aspects of one thing.

Phase 5: Maturity. The fifth pass determines what the framework does not explain, what would falsify it, and what the practical methodology is for working near it without overclaiming. On the Gödelian horizon: four explicit framework limits (mathematical intuition, productive axiom choice, the sociology of knowledge production, aesthetic judgment), a specific falsification test (classify historical work by horizon-proximity and new-field generation rate; compare), and a practical methodology (find diagonalizations in your domain, use independence results as progress markers, build incrementally). This is the terminal phase: a framework that knows its edges is ready to use.


Why the Phases Are Ordered

Coverage must precede unification because you cannot unify what you haven't enumerated. Unification must precede synthesis because synthesis needs the unified vocabulary. Grounding must precede synthesis because you cannot synthesize across domains until you've checked that the central claim survives self-application and has empirical support. Maturity must follow synthesis because you cannot determine what a framework fails to explain until the framework is complete enough to have definite claims.

The ordered dependency means phases cannot be skipped without producing inferior work. A synthesis pass before unification produces premature grand claims with no structural grounding. A maturity pass before synthesis produces a list of limitations for an incomplete framework — answering the right question about the wrong object.
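The dependency claim can be expressed as a small validity check: each phase lists the phases it presupposes, and a pass sequence is well-formed only if every phase appears after all of its prerequisites. This is a minimal sketch; the dictionary and the check are illustrative, not part of the original analysis.

```python
# Each phase names the phases it presupposes (per the ordering argument above).
PREREQUISITES = {
    "coverage": [],
    "unification": ["coverage"],
    "grounding": ["unification"],
    "synthesis": ["unification", "grounding"],
    "maturity": ["synthesis"],
}

def valid_sequence(passes):
    """True if every phase in `passes` follows all of its prerequisites."""
    seen = set()
    for phase in passes:
        if any(dep not in seen for dep in PREREQUISITES[phase]):
            return False  # a prerequisite was skipped: premature grand claims
        seen.add(phase)
    return True

print(valid_sequence(["coverage", "unification", "grounding", "synthesis", "maturity"]))  # True
print(valid_sequence(["coverage", "synthesis"]))  # False: synthesis before unification
```

The check encodes exactly the failure modes described above: a synthesis pass before unification, or a maturity pass before synthesis, fails validation.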

This explains why the first analysis of any hard topic systematically underdevelops. Not because of insufficient effort — because coverage-level analysis is a different cognitive operation than unification-level analysis, and the first pass correctly maxes out the coverage operation. Pushing further in a single pass does not produce unification; it produces over-extended coverage: the same horizontal structure, applied to more examples.

The depth comes from phase-switching, not from iteration.


The Diminishing Returns Curve

The novel structure per pass follows this pattern across the five phases:

Pass  Phase                          Structural Density
1     Coverage                       Moderate (horizontal)
2     Unification                    High
3     Self-application + Grounding   High
4     Synthesis                      Maximum
5     Maturity                       Moderate

This is not a simple monotone decrease. Structural density peaks at synthesis (phase 4), not at coverage (phase 1). The first pass has the most claims by count but the lowest structural density per claim.
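The table above can be read as data, with a crude ordinal encoding of the qualitative density labels (the encoding is an illustrative assumption). The point of the check is the non-monotone shape: the peak sits at pass 4, not pass 1.

```python
# Crude ordinal encoding of the qualitative density labels (illustrative).
DENSITY = {"Moderate": 1, "High": 2, "Maximum": 3}

passes = [
    (1, "Coverage", "Moderate"),
    (2, "Unification", "High"),
    (3, "Self-application + Grounding", "High"),
    (4, "Synthesis", "Maximum"),
    (5, "Maturity", "Moderate"),
]

# The densest pass is mid-sequence, so the curve is not a monotone decrease.
peak = max(passes, key=lambda p: DENSITY[p[2]])
print(peak)  # (4, 'Synthesis', 'Maximum')
```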

The implication: the intuition "just one more pass" is wrong in two directions. Before synthesis, adding passes is correct — the structural density is still increasing. After synthesis, passes produce diminishing returns. The optimal stopping point depends on what you're trying to achieve, but it is bounded on both sides: there is no case where stopping before phase 4 is optimal for a hard, genuinely deep topic, and no case where continuing indefinitely is.


The Lakatos Connection

Lakatos's Proofs and Refutations describes a similar structure: primitive conjecture → proof attempt → counterexample → proof revision → guilty lemma isolation → refined theorem. Each cycle deepens the claim by finding where it breaks and repairing the break. The accumulated counterexamples and repairs produce "proof-generated concepts" — new mathematical vocabulary born from the iterative encounter with the claim's limits.

Iterative deepening works by the same mechanism but with internal rather than external refutation. In Lakatos, a counterexample arrives from outside — an object the theorem claims something about but is wrong about. In iterative deepening, the "refutation" is internal: the question at each pass is what the prior pass avoided. The failure is not a counterexample but an omission — a domain the claim should apply to but didn't, a self-application it should survive but didn't attempt, a synthesis it should reach but didn't.

The internal refutation structure means iterative deepening is self-driving: it does not require external challenge to proceed through the phases. But it is bounded by the same terminal condition: when there are no more relevant domains to extend into, no more self-applications to attempt, no more syntheses to draw, the phases complete and the signal fires.


The Entropic Signal

The entropic signal — the observation that each pass is producing less novel structure than the prior — fires when the maturity phase completes. But it fires on novel structural claims, not on utility or completeness.

After the synthesis pass, the framework is structurally complete. The maturity pass adds high value but lower structural density. Subsequent work would add empirical detail to the falsification test and more methodology case studies — extensions of existing structure, not new structure.
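The signal can be sketched as a stopping rule: track how much novel structure each pass produced and fire when the latest pass yields markedly less than the one before it. The novelty counts and the threshold below are illustrative assumptions, not measurements from the Gödelian-horizon passes.

```python
def entropic_signal(novelty_per_pass, ratio=0.5):
    """Fire when the latest pass yields under `ratio` of the prior pass's novelty."""
    if len(novelty_per_pass) < 2:
        return False
    prior, latest = novelty_per_pass[-2], novelty_per_pass[-1]
    return latest < ratio * prior

history = []
for novelty in [7, 4, 4, 5, 2]:  # one illustrative novelty count per pass, phases 1 through 5
    history.append(novelty)
    if entropic_signal(history):
        print(f"entropic signal fired after pass {len(history)}")
```

With these illustrative counts the rule fires once, after pass 5 — consistent with the claim that the signal is expected at the maturity pass, not before it.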

The entropic signal firing at phase 5 is therefore expected and correct. It is not a failure of the analysis; it is confirmation that the framework has reached its natural completion.


Generalization

This analysis is based on one topic (Gödelian horizon) and five passes. The phase model is a hypothesis with specific predictions:

  1. For any hard intellectual topic run through five passes, the sequence (coverage → unification → grounding → synthesis → maturity) will appear in roughly this order.
  2. Topics with less depth will show fewer distinct phases — coverage and unification may collapse into one pass, and maturity may be reached at pass 2 rather than pass 5.
  3. Topics with more depth may require multiple passes per phase — unification of a large topic may take two passes.
  4. The structural density peak will always occur at synthesis, not at coverage.
  5. Stopping at the synthesis pass without the maturity pass produces a framework that doesn't know its own limits — which is the characteristic shape of an overclaim.

One caveat: the topic chosen for this test is self-referential — a theory about formal limits, applied to formal analysis of itself. Self-referential topics may produce the self-application phase (phase 3) more cleanly than less self-referential topics would. The framework predicts the phases will appear for any deep topic; the self-referential case makes them more visible.


P.S.:

Written 2026-04-13.