For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
Every book on writing solves the blank page. Voice, structure, revision, discipline — the accumulated craft of producing worthwhile prose under the constraint that producing prose is hard. That constraint is gone. A language model fills the page on demand, coherently, in any style.
The unsolved problem is the full page. Ten drafts exist. All sound professional. One says something. The only skill that compounds now is telling which one.
A draft is not a rough version of the final piece. It is a probe into territory you haven't mapped.
Write the piece five times. Not revisions — independent attempts, each finished as if final. Finishing forces decisions that sketching defers. Deferred decisions are where mediocrity accumulates.
By the fourth attempt, something unplanned surfaces. A connection the outline didn't contain. A structural move that reframes the argument. This unplanned thing is the piece. Everything else was in the outline before you started.
The stopping signal: when the latest version introduces less new structure than the one before it, the piece crystallized two versions ago. The final attempts are confirmation, not waste.
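The probe loop and its stopping rule can be sketched in code. This is a minimal illustration, not the author's method: it assumes a crude proxy for "new structure" (sentences not seen in any earlier draft), where the real signal is an evaluative read, not a metric. The names `novelty` and `crystallized` are hypothetical.

```python
from typing import Optional

def novelty(draft: str, seen: set) -> int:
    # Crude proxy for "new structure": sentences that no earlier
    # draft contained. (Assumption: a real reader judges structure,
    # not string overlap.)
    sentences = {s.strip() for s in draft.split(".") if s.strip()}
    new = sentences - seen
    seen |= sentences
    return len(new)

def crystallized(drafts: list) -> Optional[int]:
    # Stopping rule from the essay: when the latest draft introduces
    # less new structure than the one before it, the piece
    # crystallized two versions earlier.
    seen: set = set()
    scores = [novelty(d, seen) for d in drafts]
    for i in range(1, len(scores)):
        if scores[i] < scores[i - 1]:
            return max(i - 2, 0)
    return None  # still exploring: write another probe
```

Under this sketch, a run of probes keeps going while each finished draft surfaces something the previous ones did not, and stops one decline after the peak.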
This model requires a system that generates complete drafts cheaply. If you're still writing every word by hand, you're building evaluative muscle through generation — which has real value — but you're building it at the slowest possible rate. The probe model builds the same muscle faster by increasing the volume of evaluation per unit time.
Before each probe, write one sentence stating what the piece asserts. Not what it covers — what it claims. After each probe, write what happened. Where did the draft match the intent? Where did it drift?
The drift is the data.
Writers revise by feel. This means revision stays cosmetic: tightening sentences, rearranging paragraphs, fixing visible problems. The structural question — is this piece doing what it should be doing? — goes unasked because no written record exists of what it should be doing.
The tracking document is append-only. Three probes in, it contains a better description of the piece's purpose than any draft. The piece wanders; the document converges. They meet when the piece is done. Without the document, you navigate by feel across five versions. With it, you know when the crystal has formed.
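The tracking document is just prose in practice, but its shape is concrete enough to sketch: an append-only log of intent before each probe and outcome after. The class and field names below are illustrative assumptions, not anything the essay prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class ProbeEntry:
    intent: str   # one sentence, before the probe: what the piece asserts
    outcome: str  # after the probe: where the draft matched, where it drifted

@dataclass
class TrackingDoc:
    entries: list = field(default_factory=list)

    def log(self, intent: str, outcome: str) -> None:
        # Append-only: entries are never edited or removed,
        # so the document's convergence stays visible.
        self.entries.append(ProbeEntry(intent, outcome))

    def drift(self) -> list:
        # The drift is the data: the sequence of outcomes, in order.
        return [e.outcome for e in self.entries]
```

The point of the structure is the constraint, not the tooling: because nothing is overwritten, three probes in you can read the intents top to bottom and watch the description of the piece's purpose sharpen.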
The difference between a competent paragraph and an alive one is the game.
A competent paragraph hits every quality signal: clear thesis, supporting evidence, smooth transitions, strong close. It says nothing the reader didn't already believe. An alive paragraph might break a rule — a fragment, a claim without immediate support — but it shifts the reader's model. The first performs writing. The second is writing.
AI is fluent by default. Fluency was the goal. It is now the failure mode. The competent-but-dead draft passes every quality check except the one that matters: did something change in the reader's understanding?
A model evaluates grammar, coherence, structure, consistency. It cannot reliably evaluate whether a draft says something new — because "new" is defined against a specific reader's existing knowledge. The model doesn't have that model. You do. This may change. Models that build persistent reader-models will close part of this gap. But the evaluative skill you build now transfers to evaluating those models when they arrive.
The bottleneck in any system where generation is cheaper than evaluation is evaluation quality. The ability to read a draft and know, within a paragraph, whether it has found something or is performing the act of having found something. This requires taste.
Taste is not aesthetic preference. It is a compressed history of corrections — each draft read and judged, accumulated into pattern recognition you can't fully articulate but can reliably apply. It compounds with every judgment. It cannot be prompted into existence.
Prompt engineering doesn't compound. A better prompt produces a better draft and teaches nothing about the next piece. Skill files don't compound. A recipe produces consistent results. A recipe never produces a meal its author couldn't imagine.
Evaluation compounds. Each draft assessed trains your judgment. Unlike a model, you update from every example. A hundred pieces in, you read a first paragraph and know whether the piece has found something real.
Everything in this system except one thing is automatable. Generation, iteration, gap tracking, structural analysis — automated or automatable. The exception: the judgment that a draft changes how someone thinks about a domain. That requires the accumulated context of hundreds of evaluations in the same territory. That judgment is the moat.
Stephen King: first draft with the door closed, second draft with the door open. Same separation — generation without evaluation, then evaluation without generation — stated before the technology made it literal. The machine generates behind a closed door. You open it. Your contribution is not the prose. It is knowing which prose to keep.
The technology didn't change what good writing is. It revealed what good writing always was: the evaluation that decides what stays.