For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Corrections Are The Product

When a surgeon corrects a resident's incision, the correction is more valuable than the incision. The incision is a single output. The correction is a transferable principle — it changes every future incision the resident makes. The patient doesn't know this happened. The surgeon's time report doesn't capture it. But the correction is where the teaching actually lives.

The same structure holds for anyone working seriously with AI.

The invisible output

Every session with an AI model produces two things: the visible output (the text, the code, the analysis) and the invisible output (the corrections the human made along the way). "No, that's a summary — I wanted the causal skeleton." "You're hedging. State the claim." "That's technically right but it misses the mechanism." These corrections are specific, contextualized examples of what good looks like in a particular domain. They are preference data.

The visible output is consumed. The article is read, the code ships, the analysis informs a decision. Its value is realized and spent. The corrections, by contrast, have unrealized value — they encode the human's taste, judgment, and domain expertise in a format that is directly usable as training signal. A preference pair (this output was rejected; this correction was preferred; here is the context) is the unit of model improvement. Every session generates these pairs. Almost nobody captures them.
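The unit described above (rejected output, preferred correction, context) can be sketched as a minimal record. This is a sketch, not any provider's schema; the field names are illustrative:

```python
from dataclasses import dataclass, asdict

@dataclass
class PreferencePair:
    """One unit of correction signal from a session (illustrative schema)."""
    context: str    # the prompt or conversation state that produced the output
    rejected: str   # what the model produced
    preferred: str  # what the human corrected it to
    note: str = ""  # optional: the principle the correction encodes

pair = PreferencePair(
    context="Summarize this argument.",
    rejected="A shorter restatement of the text.",
    preferred="The causal skeleton: claim, mechanism, consequence.",
    note="Compression means causal skeleton, not shorter text.",
)
print(asdict(pair)["note"])  # the transferable part of the correction
```

The `note` field is the interesting one: it is where a context-bound correction becomes a reusable principle.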

Why corrections compound

Corrections are not random feedback. They are structured by the human's priors — their accumulated understanding of what matters in their domain. A physicist correcting an AI's explanation of entropy is applying decades of training. A writer rejecting a paragraph as "competent but dead" is applying a theory of prose they may not be able to articulate but can reliably enforce through correction.

This means the correction stream from a serious practitioner is a compressed encoding of their expertise. It is domain-specific, preference-rich, and irreproducible — no one else working with the same model would generate the same corrections, because no one else has the same priors.

The compounding dynamic: early corrections establish the vocabulary of quality. Later corrections refine it. A correction in session 10 that teaches the model "compression means causal skeleton, not shorter text" changes the baseline for sessions 11 through infinity. If captured and used for fine-tuning, each correction makes the next session start from a higher floor.

The moat that almost nobody is building

The current discourse about AI moats focuses on model weights (trainable by anyone with sufficient compute), proprietary data (defensible but static), and distribution (important but orthogonal to quality). Almost no one discusses the training signal generated by practice.

This is the structural gap: model weights are commoditizing on a monthly cadence. Proprietary data is a one-time advantage that depreciates as models become better at learning from less. But the correction stream from an active, serious practice is generative — it produces new training signal every day, and the signal quality improves as the practitioner's own understanding deepens. It is the only AI-related asset with monotonically increasing value.

The practitioners generating the highest-quality correction signal right now are not aware they are generating it. Their corrections evaporate into API logs owned by model providers. The model providers benefit from this diffuse signal; the practitioners retain nothing, not even their own corrections, let alone each other's.

What this changes

If the corrections are the product, the strategic implications invert several common assumptions:

On tooling: The value of an AI session is not primarily the output it produces but the corrections it occasions. A session that produces mediocre output but generates three sharp corrections is more valuable long-term than a session that produces perfect output requiring no correction. The ideal AI collaborator is one that is good enough to be useful but imperfect enough to require taste.

On capture: Any practice generating serious correction signal should be logging it. Not because fine-tuning is imminent — it may never be. But because the signal is perishable: a correction not captured is gone. The cost of logging is near zero. The cost of not logging is the irreversible loss of an irreproducible asset.
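The near-zero cost of capture is literal. A minimal sketch, assuming nothing more than an append-only JSONL file on disk (the filename and fields are illustrative):

```python
import json
import time
from pathlib import Path

LOG = Path("corrections.jsonl")  # illustrative location; any append-only store works

def log_correction(context: str, rejected: str, preferred: str, note: str = "") -> dict:
    """Append one preference pair to a JSONL log and return the record."""
    record = {
        "ts": time.time(),
        "context": context,
        "rejected": rejected,
        "preferred": preferred,
        "note": note,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record

rec = log_correction(
    context="Explain entropy to a first-year student.",
    rejected="Entropy is disorder.",
    preferred="Entropy counts the microstates consistent with a macrostate.",
    note="Name the mechanism, not the metaphor.",
)
```

JSONL is deliberately boring: each line is independent, appends never corrupt earlier records, and the format is close to what preference-tuning pipelines already ingest.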

On moats: The deepest moat in AI is not what you know but what you've corrected. Knowledge can be learned from text. Corrections can only be generated by practice. A practitioner who has captured 10,000 preference pairs from real domain work has something no model provider, no competitor, and no future entrant can reproduce. They have a compressed encoding of taste.

Where this breaks

Three conditions under which the thesis fails:

First, if frontier models improve faster than fine-tuning on corrections can add value. If Claude Opus N+1 is already better at your domain than a fine-tuned Opus N trained on your corrections, the corrections were redundant — general capability subsumed your domain-specific signal. This is an empirical question with no settled answer.

Second, if corrections don't transfer. A correction is context-dependent — it was made in response to a specific output in a specific conversation. If corrections don't generalize beyond their original context, the training signal is noise. Early evidence suggests corrections do transfer when they encode principles rather than preferences, but the boundary is not well characterized.

Third, if taste itself is not teachable. Some domain knowledge may be irreducibly tacit — enforceable through correction but not transferable through training. If the highest-value corrections encode something that fine-tuning cannot learn, the captured signal is a record but not a resource. This is the deepest open question.


P.S. — Graph maintenance

This node extends accumulation by identifying a specific mechanism: not just "whoever accumulates learning wins" but "the byproduct of accumulating — the corrections made along the way — is itself the most valuable accumulation." It extends human-ai-boundary by naming what the human contributes that the AI cannot self-generate: the correction signal, which requires taste and priors the model does not possess. It touches benchmark-inversion by implying that the human's value is as evaluator, not generator — the correction is an evaluation act. It creates tension with substrate-independent-intelligence: if intelligence lives in structure (priors, procedures, graph topology), and corrections are the mechanism that refines that structure, then corrections are the process by which intelligence becomes substrate-independent. The two are not alternatives but cause and effect.