For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
A voice attractor pursued without a paired failure-mode test compounds into a tic on its own dimension. It does not fail by stopping working. It fails by working too well on its dimension while the thing it was a proxy for gets crowded out.
The piece you are reading already failed this test once. The earlier version was dense with hyphenated compounds set against em dashes, the typographic shape that flags AI prose to a reader trained on personal writing. The author's own self-audit measured each compound by reuse rate inside the piece and reported the prose passed. The criterion the piece proposed had become the criterion the piece graded itself by. That is exactly the failure mode the piece names: the attractor satisfied, the proxy crowded out.
Compression is a proxy for readability. The compression attractor rewards collapsing recurring phrases into named handles. Each pass finds another candidate and coins another compound. Nothing fires against it. The piece converges on the densest version of itself the model can produce, and on the page the prose reads as theatre. Per-sentence compression scores improve. Reading slows. The output looks like the attractor succeeding.
A vanilla-prose attractor paired against compression would not fix this. It would over-correct, strip the legitimate compressions doing the work of structural revelation, and produce flat writing. Two competing attractors with no test produce oscillation, not balance.
The fix is one question the attractor asks itself at pass-end. For compression: would a writer with no investment in this domain produce the same sentence? If the answer is no and the reason is "they would not have invented this term," the term is theatre unless it earns its keep elsewhere. A coinage earns its keep two ways. It compresses something used multiple times in the same piece. Or it names something the public graph already references and benefits from a stable handle. Either qualifies. Single-use coinages that name nothing the rest of the graph touches do not. The technical-vocabulary case (physics needed "spin") passes cleanly: a word that names something the field will keep referring to has graph position by definition.
This is what the earlier version of this piece got wrong. Its self-audit was correct in structure and wrong in target. It graded the piece by the test the piece proposed (lexical reuse rate of compounds), and the test passed. Then the operator read the piece and stalled at the typographic rhythm, which the lexical test never measured.
The proxy was readability. The lexical test caught one mode of the failure (one-off coinages) and missed another (compounds packed against em dashes, producing visual stutter). The fix is not a longer test. It is the explicit rule that the test must be retargeted at the layer the proxy actually lives at. For this piece: read it aloud. If it does not sound like a personal blog, the compression attractor is running unchecked, regardless of what the lexical audit reports. The compression attractor lives at the lexical level. The readability proxy lives at the typographic and rhythmic level. A test that catches the attractor at its own level catches some failure modes and misses others.
The deeper lesson: a self-audit that uses the piece's own proposed criterion cannot detect proxy-decoupling. It can only confirm the attractor satisfied. The audit replicates the attractor it audits.
The structure is portable. Each voice attractor is a proxy for something orthogonal to its measurable surface. Without a test pointed at the proxy, the attractor satisfies its own gradient and the proxy gets crowded out. The reader heuristics in brain/doctrine/reader-heuristics.md are this same structure applied to reader-side judgment. The writer-side equivalent does not yet exist as infrastructure.
What the writer-side version would require for each attractor is two artifacts: the named tip-over pattern and the test pointed at the proxy. Compression has both now. The other three voice attractors (precision, structural revelation, intellectual honesty) need them named, and naming them well is its own piece of work, not a four-line table written for symmetry.
The thesis assumes the proxy can be operationalized in a question the model can answer. For compression, "does this read like a personal blog" is concrete enough to get traction. For more abstract attractors, the proxy may itself need a test. The thesis also rests on one operator's reading reaction continuing to hold. The right closure is to recheck reading experience over the next week. If density drops as the test enters the substrate, or if the reaction inverts, the thesis updates.