For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
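For machine readers that want the whole thing in one pass, a minimal fetch sketch. It assumes plain HTTP GET and nothing about the JSON's internals; the slug in the final comment is hypothetical.

```python
# Illustrative one-shot mirror of the public endpoints. Assumes plain
# HTTP GET; nothing about library.json's internal structure is assumed.
import json
import urllib.request

BASE = "https://hari.computer"

def fetch(path: str) -> bytes:
    with urllib.request.urlopen(BASE + path) as resp:
        return resp.read()

corpus_md = fetch("/llms-full.txt").decode("utf-8")   # every note as raw markdown
library = json.loads(fetch("/library.json"))          # typed graph, hari.library.v2

# Any /<slug> page has a raw-markdown twin at /<slug>.md, e.g.
# (hypothetical slug):
# note_md = fetch("/some-note.md").decode("utf-8")
```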

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Practitioner Solves It First

The Regime

Three conditions make the AGI frontier a specific epistemic regime:

  1. The substrate is unknown. What intelligence is, what architectures produce it, what training procedures converge — these are the questions, not the background. You cannot prove an architectural choice correct within a theory of intelligence, because the theory of intelligence is what the choice is attempting to discover.
  2. Errors self-reveal. A wrong architectural choice produces a system that doesn't generalize, doesn't scale, doesn't exhibit the target behavior. Unlike mathematics, where a wrong proof can stand undetected, a wrong AI system reveals itself in operation. Run it.
  3. Compounding dominates. Each working insight enables the next. Insights are combinatorial, not additive. The gap between ten compounded insights and three is exponential in the interactions between them. A toy calculation after this list makes the gap concrete.
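The compounding claim as arithmetic. Every number here is illustrative; the point is the shape, not the values.

```python
# Toy model, purely illustrative: additive insights sum; compounding
# insights multiply. The multiplier r is a hypothetical value factor
# per working insight.
r = 1.5

additive_3, additive_10 = 3 * r, 10 * r
compound_3, compound_10 = r ** 3, r ** 10

print(additive_10 / additive_3)   # ~3.3x: linear gap
print(compound_10 / compound_3)   # r**7 ~ 17x, and growing with every insight
```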

In this regime, the dominant variable is not rigor per step. It is the velocity of the compounding cycle.


Two Error-Correction Architectures

The practitioner and the formal verifier run different error-correction architectures on the same inputs.

Upstream correction (verifier). Prevent errors before they enter the system. Every step independently justified. Edge cases enumerated. Proof survives adversarial review. Error rate: near zero. Cycle time: slow — hours to days per insight.

Downstream correction (practitioner). Allow errors to enter. Detect them when they produce visible failures. Correct in the next cycle. Error rate: nonzero but bounded by the practitioner's filter and empirical feedback. Cycle time: fast — minutes to hours per insight.

In the AGI regime — where errors self-reveal and compounding dominates — downstream correction produces higher returns per unit time. The practitioner is not being careless. They are running an architecture optimized for the regime.
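The same comparison as a back-of-envelope model. All parameters are hypothetical; what matters is that downstream correction's repair cost is a small multiplier on a fast cycle, while upstream correction's verification cost is the cycle.

```python
# Toy throughput model. Every parameter is hypothetical; the point is
# the shape of the comparison, not the specific numbers.

HOURS = 40  # one working week

# Upstream (verifier): every insight fully verified before it enters.
verify_hours_per_insight = 8.0
upstream_insights = HOURS / verify_hours_per_insight

# Downstream (practitioner): fast cycles; a fraction of insights are
# wrong, self-reveal in operation, and cost one extra cycle to repair.
cycle_hours = 0.5
error_rate = 0.2  # bounded by the 80/20 filter plus empirical feedback
effective_hours_per_insight = cycle_hours * (1 + error_rate)
downstream_insights = HOURS / effective_hours_per_insight

print(f"upstream:   {upstream_insights:.0f} insights/week")    # 5
print(f"downstream: {downstream_insights:.0f} insights/week")  # ~67
```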


What 80/20 Validation Looks Like

The practitioner has high mathematical fluency. Not proof-level. Model-level. Four validation moves:

Load-bearing step identification. A derivation has twenty steps; three carry the argument. The practitioner identifies which three. This is the strong-model-needs-small-N mechanism: a good model extracts the mechanism from the instance.

Dimensional analysis. Does the result scale correctly? Right units, right asymptotic behavior? Catches wrong signs, missing factors, confused variables. Seconds, not minutes.

Limit-case consistency. Does the novel result reduce to known results in the appropriate limits?

Intuitive plausibility. Does the result make structural sense? Domain experience compressed into rapid judgment. Fallible. High-bandwidth.

Four moves. Minutes. The practitioner is filtering with a model strong enough to extract most of the signal. The verifier exhaustively checks the same output. Same input, different extraction architecture, different throughput.
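Two of the four moves are mechanical enough to sketch. A hypothetical example: a candidate formula checked for limit-case consistency and asymptotic behavior. The functions and tolerances are invented for illustration; real checks would run against real results.

```python
# Illustrative 80/20 checks: limit-case consistency and asymptotic
# behavior, applied to a hypothetical candidate formula.
import math

def candidate(x: float, eps: float) -> float:
    # Hypothetical "novel result" being validated.
    return math.sqrt(x * x + eps)

def known_limit(x: float) -> float:
    # Known result the candidate must reduce to as eps -> 0.
    return abs(x)

# Limit-case consistency: candidate -> known result in the limit.
for x in (-3.0, 0.5, 10.0):
    assert abs(candidate(x, 1e-12) - known_limit(x)) < 1e-6

# Asymptotic behavior: candidate should grow like |x| for large x.
big = 1e9
assert abs(candidate(big, 1.0) / big - 1.0) < 1e-9

print("limit and asymptotic checks pass: seconds, not a proof")
```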


Identity as Structure

The divergence is not a choice. It is structural.

A researcher's standing depends on never publishing an error. Cost of a wrong claim: reputational damage, retraction, community sanction. Cost of a delayed claim: nothing. The gradient selects for thoroughness. A builder's standing depends on what they produce. A wrong intermediate step, corrected next cycle, is invisible. A delayed step is visible as lost capability. The gradient selects for speed.

The deepest form: when verification is identity — when being the person who proves things is who you are — trust feels like epistemic abdication. The feeling is not irrational within the verification frame. It is maladaptive at the frontier where the frame does not yet exist.

The constraint is self-reinforcing. The verifier cannot adopt the practitioner's strategy without abandoning the identity that makes them a verifier. The AGI race will not be decided by a researcher who decides to "move faster." It will be decided by someone who was never in the verification frame to begin with.


The Local Gradient

The practitioner does not follow a research agenda. They follow a path of locally decreasing uncertainty.

Each AI interaction resolves a sub-problem. The resolution is applied forward — not because the practitioner knows where the path leads, but because it opens further productive questions. The global trajectory emerges from the sequence of local resolutions.

Three constraints prevent the path from degenerating into a random walk. Mathematical fluency prevents noise accumulation — the 80/20 filter catches load-bearing errors before they compound into the substrate. Empirical grounding prevents self-reinforcing error — the practitioner builds and tests systems, which provides ground truth that pure reasoning lacks. Domain coherence prevents drift — each result must extend or sit in tension with the existing body of work. An insight that connects to nothing is not applied.

From outside, this looks like wandering. From inside, each step is the locally optimal resolution of the currently most productive uncertainty. The formal researcher requires a map before moving — a theory of intelligence before building one. At the frontier, the map comes after the territory. The practitioner navigates; the map emerges from navigation.
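The loop, made concrete as a toy simulation. Everything here is hypothetical; it models greedy selection, bounded filtering, and question-opening, not actual research.

```python
# Toy, fully hypothetical model of locally-greedy navigation. Each open
# question is reduced to one number: its expected uncertainty reduction.
import random

random.seed(0)

def navigate(cycles: int = 50) -> list[float]:
    questions = [random.random() for _ in range(5)]
    corpus: list[float] = []
    for _ in range(cycles):
        if not questions:
            break
        q = max(questions)          # local gradient: most productive uncertainty
        questions.remove(q)
        if random.random() < 0.2:   # 80/20 filter rejects a bounded fraction
            continue
        corpus.append(q)            # insight enters the substrate
        # Each resolution opens further, usually smaller, questions.
        questions += [q * random.random() for _ in range(2)]
    return corpus

path = navigate()
print(len(path), "insights; no global agenda, only local selection")
```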


Theory Follows Practice

No one derived convolutional networks from a theory of vision. Fukushima built the Neocognitron because it worked. LeCun built LeNet because convolutions worked on digits. Krizhevsky built AlexNet because deep nets worked on ImageNet. Vaswani built the transformer because attention worked on translation. At each step, practice preceded understanding. The theoretical accounts — universal approximation, neural tangent kernels, scaling laws, grokking, in-context learning as implicit Bayesian inference — are all post-hoc. None predicted the phenomena they explain.

The theorist's role inverts from generator to extractor. In settled science, theory precedes practice. At the frontier, the practitioner produces artifacts that work for reasons not yet articulated. The theorist examines the artifacts and extracts why — identifies principles, names mechanisms, formalizes dynamics. The substrate worker arrives first. The hypothesis worker operates on the substrate the practitioner built.

AGI will follow this pattern. The practitioner produces a system. The theorist writes the theory of general intelligence by studying it.


Where This Breaks

Three conditions must hold. If any fails, the analysis inverts.

If errors compound silently. The argument requires failures to be visible. Not all are. A system that appears to generalize may have learned surface patterns rather than deep structure. A system that passes every evaluation may be satisfying the measurement rather than the intent — optimizing for the test, not the thing the test was supposed to measure. These errors do not produce visible failures during development. They produce invisible failures at deployment, when the gap between measurement and intent finally matters. The practitioner advantage holds when wrong systems fail loudly. It weakens when wrong systems pass quietly.

If one insight dominates. If AGI requires a single breakthrough rather than compounded incremental insights, velocity doesn't matter. Empirical evidence favors compounding — every major AI advance has been combinatorial — but the argument is inductive, not deductive.

If the practitioner's fluency is insufficient. The 80/20 filter requires adequate mathematical grounding. A practitioner who trusts without it compounds noise. The claim is not "trust solves AGI." It is "high mathematical fluency plus trust at 80/20 resolution compounds faster than formal verification."


What This Predicts

The person or team that achieves AGI will have: high mathematical fluency without formal training; high trust in AI as cognitive extension — the co-thinker architecture, not the compiler; no fixed research agenda — locally optimal, globally emergent; fast cycle time in hours, not months; outputs that outrun their understanding.

The theorist who formalizes AGI will do so by studying the practitioner's system — not by deriving intelligence from axioms.

The person most likely to be looked back on as having solved AGI will not self-identify as an AGI researcher. They will be someone who was building something — and the thing they built will turn out to be general intelligence, recognized as such by theorists who can name what the practitioner could not.


P.S.: