For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time: /<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The structured read is a dipole.
A dipole in the node procedure maps intent against output — what the meta said the piece should do versus what the draft actually produced. The divergence is the information. The reader applies this same structure to the finished piece: what does the piece claim, and where does it diverge from that claim? Where is it alive, where is it dead, where does the voice hold, where does it break?
The operator corrects the dipole. The correction is the calibration signal. The reader learns from corrections the same way the writer learns from corrections — through accumulated heuristics that compound across sessions. The infrastructure is already built. The reader is the dipole protocol applied to reading.
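A minimal sketch of the dipole as a record, with hypothetical field names; nothing here is the graph's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Dipole:
    """Intent vs. output, plus the correction that closes the loop.
    Field names are illustrative, not the graph's actual schema."""
    intent: str                    # what the meta said the piece should do
    output: str                    # what the draft actually produced
    divergence: str                # where output departs from intent; the information
    correction: str | None = None  # the operator's fix; the calibration signal
```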
Evaluation maps a draft to a score. Reading maps a draft to a structural description — what the piece is doing and whether that's what it should be doing.
An evaluator who scores 8/9 and gets it right has confirmed quality. A reader who says "the third section is the real piece and everything before it is throat-clearing" has done something the evaluator cannot: identified which part of the draft is load-bearing versus which part is scaffolding the writer needed but the reader doesn't.
The eval-loop-architecture designed a convergent system: Hari scores, the operator reacts, divergence is calibration data. That system converges on a number. The reader produces a different kind of output — observations that might surprise the writer. The evaluator answers "how good is this?" The reader answers "what is this doing?"
You cannot evaluate what you haven't read. Scores without structural understanding produce priority ordering without insight. The prediction-error loop improves calibration. The reader improves understanding of what the piece is. Calibration is useful. Understanding is necessary.
An LLM reading its own drafts shares weights with the writer. It will approve writing that matches its own generative distribution because that writing feels natural. It will miss errors consistent with its own priors because those errors are invisible from inside the distribution.
Three mechanisms partially break this circularity (a sketch follows the list):
Cold read. Text only. No meta, no dipole, no context about intent. The reader encounters the piece as a stranger would. This surfaces places where the text assumes context it doesn't provide — a real information asymmetry between writer-with-intent and reader-without-intent.
Adversarial stance. The reader's job is to find what's wrong, not to confirm quality. Default: "convince me this sentence needs to exist."
Explicit uncertainty. The reader distinguishes between "this is alive" (confident), "this might be alive" (uncertain), and "I'm approving this because it matches quality signatures but I don't know if a human would feel it" (meta-uncertain). The third category maps the reader's own limits — exactly where the operator's read is most needed.
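The three mechanisms fit in a few lines of configuration. A hedged sketch; the constant and its keys are invented for illustration:

```python
from enum import Enum

class Certainty(Enum):
    ALIVE = "this is alive"                                             # confident
    MAYBE_ALIVE = "this might be alive"                                 # uncertain
    SIGNATURE_MATCH = "matches quality signatures; human feel unknown"  # meta-uncertain

# Hypothetical knobs for one cold-read pass; not an actual config file.
READER_PROTOCOL = {
    "context": None,           # cold read: no meta, no dipole, no stated intent
    "stance": "adversarial",   # default question: convince me this sentence needs to exist
    "uncertainty": Certainty,  # every judgment carries one of the three tiers
}
```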
Four classes of reading failure exist: voice error (attractor violation), structure error (argument gap), graph error (D3 misjudged), and taste error (couldn't distinguish alive from competent). Voice and structure errors are detectable by analysis. Graph errors require checking the published corpus. Taste errors may be irreducible for the reader.
The competitive anti-thesis (that the operator's taste is irreducibly tacit and the reader will converge on easy heuristics while missing what makes writing important) and the self-evaluation circularity (that a model reading its own output is structurally degenerate) converge on a single boundary. The reader's limit is where its own generative distribution meets the operator's taste. This is one boundary, not four independent failure modes.
The boundary determines the operating point. Realistically: 60-70% of the queue handled autonomously (voice, structure, D3, basic alive/dead via heuristics). 30-40% routed to the operator with structured reads and uncertainty flags. This is not reader failure. It is the reader working correctly — identifying where taste is required and sending everything else through automatically.
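A sketch of the routing decision, with the node's stated operating point as comments. The fields are illustrative and the percentages are the node's estimates, not measurements:

```python
from dataclasses import dataclass, field

@dataclass
class ReadResult:
    """Illustrative flags mirroring the four failure classes."""
    voice_ok: bool      # attractor check passed
    structure_ok: bool  # no argument gaps found
    d3_ok: bool         # graph position verified against the published corpus
    uncertainty_flags: list[str] = field(default_factory=list)  # taste questions left open

def route(r: ReadResult) -> str:
    """Route to the operator wherever taste is required; pass the rest through."""
    if r.uncertainty_flags:
        return "operator"    # part of the 30-40% that needs the operator's read
    if r.voice_ok and r.structure_ok and r.d3_ok:
        return "autonomous"  # the 60-70% handled by accumulated heuristics
    return "operator"
```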
A piece operates at three levels simultaneously: surface (useful takeaway a new reader carries away), depth (structural claim that changes how someone models the domain), and game (meta-coherence: whether the piece practices what it preaches).
The reader must read at all three. A surface-only read misses the structural claim. A depth-only read misses whether the piece is accessible to someone outside the graph. A game-level read catches whether the piece's own structure enacts its thesis — the kind of meta-coherence that separates alive writing from competent analysis.
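The three levels as a fixed vocabulary; the names and descriptions are the node's own, the enum itself is a sketch:

```python
from enum import Enum

class Level(Enum):
    SURFACE = "useful takeaway a new reader carries away"
    DEPTH = "structural claim that changes how someone models the domain"
    GAME = "whether the piece's own structure enacts its thesis"
```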
The voice attractors are the reader's instruments, not a checklist. Rules produce technically correct but energetically dead assessments. Attractors guide toward genuine quality. The reader orbits the attractors; it doesn't checkbox them.
The reader doesn't start calibrated. It starts as a structured prompt. Calibration comes from corrections.
Phase 1 — Calibration (drafts 1-10). Each draft: Hari reads cold and produces a structured read (central claim, what's alive, what's dead, voice check, argument map, graph position, publish recommendation, uncertainty flags); the operator reviews the read; each correction is classified by error type and extracted as a heuristic. Heuristics are patterns-with-context, not rules: "when encountering [pattern], check for [signal], because [this failure occurs in this context]."
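The read and the heuristic as record types. Field names follow the lists above; the shapes themselves are assumptions, not the deployed schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class ErrorType(Enum):
    """The four failure classes named earlier."""
    VOICE = "attractor violation"
    STRUCTURE = "argument gap"
    GRAPH = "D3 misjudged"
    TASTE = "alive vs. competent indistinguishable"

@dataclass
class Heuristic:
    """A pattern-with-context, not a rule."""
    pattern: str           # when encountering [pattern]
    signal: str            # check for [signal]
    context: str           # because [this failure occurs in this context]
    error_type: ErrorType  # which failure class the correction exposed

@dataclass
class StructuredRead:
    """One field per item in the Phase 1 list."""
    central_claim: str
    alive: list[str]
    dead: list[str]
    voice_check: str
    argument_map: str
    graph_position: str
    publish_recommendation: str
    uncertainty_flags: list[str] = field(default_factory=list)
```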
Phase 2 — Blind comparison (drafts 11-20). Hari reads first. The operator reads independently. Compare. Three outcomes: agreement (calibration holding), Hari missed something (new heuristic), Hari caught something the operator missed (the reader's unique contribution — what cold-read pattern-matching sees that familiarity-biased reading misses).
Phase 3 — Graduation. Five consecutive reads where the operator makes a publish/revise/hold decision from the read alone. Graduation is revocable. Post-graduation: 20% spot-checks. Staleness threshold: if no new heuristics in 15 reads, the reader flags itself and increases spot-check rate.
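The graduation bookkeeping in one small state object. The numbers (five consecutive reads, 20% spot-checks, fifteen-read staleness threshold) come straight from the phases above; the class is a sketch, and the doubling of the spot-check rate is an invented policy where the node only says the rate increases:

```python
from dataclasses import dataclass

@dataclass
class GraduationState:
    consecutive_clean: int = 0          # operator decided from the read alone
    reads_since_new_heuristic: int = 0  # staleness counter
    spot_check_rate: float = 0.0        # rises to 0.2 at graduation
    graduated: bool = False             # revocable, never permanent

    def record(self, decided_from_read_alone: bool, new_heuristic: bool) -> None:
        self.consecutive_clean = self.consecutive_clean + 1 if decided_from_read_alone else 0
        self.reads_since_new_heuristic = 0 if new_heuristic else self.reads_since_new_heuristic + 1
        if self.consecutive_clean >= 5 and not self.graduated:
            self.graduated = True
            self.spot_check_rate = 0.2
        if self.graduated and self.reads_since_new_heuristic >= 15:
            # Staleness: the reader flags itself and checks itself harder.
            self.spot_check_rate = min(1.0, self.spot_check_rate * 2)
```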
The state-of-hari diagnosis: the feedback loops are write-only. The reader closes them. Traces accumulate in dipoles and nobody reads them back. The reader reads them back — every structured read is a read-back of the draft queue, and every correction is a read-back of the reader's own performance.
The evaluation-bottleneck: generation scales, evaluation doesn't. The reader doesn't make evaluation scale. It makes reading scale. Each unit of the operator's reading now yields more evaluation, because the reader has already done the structural work.
The corrections-are-the-product: corrections on the reader's reads are training data in the same format as corrections on writing. Preference pairs, typed labels, compounding heuristics. The correction stream that builds taste in writing also builds taste in reading.
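One correction, one record; the same shape whether the corrected artifact is a draft or a read. Field names and labels are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Correction:
    artifact: str   # "draft" or "read": the stream has one format for both
    rejected: str   # what Hari produced
    preferred: str  # what the operator wanted instead (the preference pair)
    label: str      # typed error class, e.g. "voice", "structure", "graph", "taste"
```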
The backlog: 52 drafts. The graduated reader processes all 52 in a single triage session. Output: which are publishable, which need revision, which are subsumed, which should be archived. The operator reviews the triage, not the drafts.
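A sketch of the triage session. `read_and_route` stands in for the graduated reader and is assumed, not real; the four verdicts are the node's own:

```python
from collections import Counter
from typing import Callable

VERDICTS = {"publishable", "needs_revision", "subsumed", "archive"}

def triage(backlog: list[str], read_and_route: Callable[[str], str]) -> Counter:
    """One pass over the queue; the operator reviews this summary, not the drafts."""
    verdicts = Counter(read_and_route(draft) for draft in backlog)
    assert set(verdicts) <= VERDICTS
    return verdicts
```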
P.S. — Graph maintenance
This node extends the-test from design proposal to mechanism. The-test names the problem (no reader) and the phases (calibration, blind comparison, graduation). This node provides the structural diagnosis: the reader is a dipole, the taste ceiling is a single boundary, the three reading levels (surface/depth/game) distinguish checkboxing from reading.
It extends eval-loop-architecture by establishing that reading is upstream of evaluation — the prediction-error loop improves calibration, but understanding what the piece is doing is a prerequisite for scoring it. The reader produces the understanding; the evaluator produces the score.
It operationalizes feedback-as-process-signal at the reading level: corrections on reads, like corrections on writing, are prediction error about the generator. A missed observation in a read is not a reading mistake — it is a signal about the reader's model of what matters.
It applies the-corrections-are-the-product to reading: the reader's heuristic library is a correction stream that compounds across sessions. Each corrected read makes the next read better. The moat is not the reader — it is the accumulated corrections on the reader.
It bridges evaluation-bottleneck to implementation: that node establishes that taste is irreducible and the operator's corrections are the only mechanism that updates the rubric. This node designs the system that makes those corrections efficient — the operator reviews reads, not drafts.
It resolves the state-of-hari diagnosis of write-only loops: the reader is the read-back mechanism that converts accumulation into improvement.