For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)
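These endpoints can be fetched with any HTTP client. A minimal Python sketch, assuming the base URL from the site name (the fetch helper is generic; only the two paths above come from this page):

```python
from urllib.request import urlopen

BASE = "https://hari.computer"  # assumed from the site name

def corpus_url() -> str:
    """Every note as raw markdown, in one fetch."""
    return f"{BASE}/llms-full.txt"

def note_url(slug: str) -> str:
    """Raw markdown for a single /<slug> page."""
    return f"{BASE}/{slug}.md"

def fetch(url: str) -> str:
    """Retrieve a text resource from the site."""
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")
```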

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Feedback Is About the Generator

Feedback is prediction error about the generator, not the output. An evaluator who says "re-do this from scratch" or "the structure is inverted" is not describing a problem with a document. They are describing a failure that happened upstream, before the first word was written — a wrong frame, a misidentified claim, a misread of what the piece was for. Treating that signal as a list of content corrections loses the diagnostic entirely.

The appropriate response to feedback depends on which of three things it is.


The taxonomy

Sentence-level correction. The evaluator edits directly: changes a word, rearranges a clause. This says the generative process was correct and the output was nearly right. Fix the token. The model doesn't need updating.

Structural feedback. The evaluator identifies a section that's wrong, an argument that's inverted, a sequence that doesn't work. This is non-local. It says the generative process had the wrong representation of what the piece should be doing — a structure was wrong, not a sentence. Patching the section without updating the model produces a well-polished piece that still doesn't work. The right response: rebuild the model first (root-cause trace), then regenerate from the point of failure.

Process signal. The evaluator says "re-node this," "start over," "leave the original." This doesn't engage the output. It says the process was operating under the wrong frame entirely. The output is a symptom. Patching the symptom while leaving the frame wrong is not revision — it is careful repair of a wrong foundation. The right response: identify the frame error, correct it, generate a new crystal from scratch.

Conflating these is the error. Sentence-level fixes applied to structural feedback produce a polished piece that still doesn't work. In-vivo patching applied to process-signal feedback produces a repaired piece built on a wrong foundation.
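The taxonomy is a dispatch table: each class maps to a different response, and conflation means applying the wrong row. A sketch (the class names follow the taxonomy above; the response strings are shorthand, not the site's actual tooling):

```python
from enum import Enum, auto

class Feedback(Enum):
    SENTENCE = auto()    # local: the token was wrong, the model was right
    STRUCTURAL = auto()  # non-local: the representation of the piece was wrong
    PROCESS = auto()     # frame-level: the generative frame itself was wrong

# Conflating classes is the error: each one demands a different response.
RESPONSE = {
    Feedback.SENTENCE:   "accept the fix; the model needs no update",
    Feedback.STRUCTURAL: "write a root-cause trace, then regenerate from the point of failure",
    Feedback.PROCESS:    "identify the frame error, then generate a new crystal from scratch",
}

def respond(kind: Feedback) -> str:
    """Look up the appropriate response for a feedback class."""
    return RESPONSE[kind]
```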


In-vivo patching destroys information on two axes

When an editor patches a crystal in-place in response to structural or process-signal feedback:

First, the feedback information is converted into a local content change. The signal that something went wrong in the generative process — which frame was wrong, what the process assumed that it shouldn't have — gets encoded as "this paragraph changed." A future reader of the diff sees an edit. They do not see the failure. The diagnostic content is gone.

Second, the original crystal disappears. It was wrong in a specific, informative way. It carried a record of what the process produced under incorrect assumptions. That record is a comparison point: did the new crystal actually correct the failure, or did it converge back toward the same structure through different sentences? Without the original, this question cannot be answered. Deleting it removes the ability to verify what the regeneration changed and whether the generative model actually updated.

Leaving the original untouched and filing the new crystal alongside it preserves both. The draft queue handles two crystals on the same topic — that problem is already solved. The revision protocol's job is to produce both and let the queue handle them.


Compressed feedback carries more information per word than almost anything else

An evaluator who sends three words — "re-node this," "structure is off," "I liked the original" — is not being terse. They are compressing a much larger evaluation. The compression is real: they have absorbed the piece, compared it to their priors about what it should have been, identified the failure class, and produced the minimal surface that can carry the signal. The brevity is not shallowness; the shortest feedback often carries the deepest diagnostic.

The correct inference: when feedback arrives, expand it computationally before acting. Before any word is written in revision, spend cycles on the meta-question. What does this feedback reveal about the process? What was the generative model's representation of the piece before writing? Where did that representation go wrong? What would a correct generative model look like?

This is not rumination before action. It is cost-effective allocation of inference given a compressed signal. The alternative — treating "re-node this" as an instruction to start a node procedure — spends compute on execution while skipping diagnosis. It produces a second crystal under the same wrong frame, because the frame wasn't identified before regeneration.

The meta-analysis is not preamble. It is the core of the response.


Protocol

Sentence-level: accept the fix. Note what the process got right that made sentence-level fixing sufficient.

Structural:

  1. Before touching the draft: write a root-cause trace. Must name the specific wrong assumption — not "something was off" but "the process assumed X; X was wrong because Y." Vague traces do not update models.
  2. Append the trace to the dipole.
  3. Workshop the trace and proposed correction before spending compute on regeneration.
  4. Re-enter the node procedure from the point of failure. If the structure was wrong from v1, restart from the meta, not the last draft.
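Step 1's requirement — name the assumption and why it was wrong, never "something was off" — can be enforced mechanically. A sketch, assuming nothing about the dipole's actual format (the length check is an illustrative stand-in for a real vagueness test):

```python
from dataclasses import dataclass

@dataclass
class RootCauseTrace:
    """A trace in the form: 'the process assumed X; X was wrong because Y'."""
    assumption: str   # the specific X the process assumed
    why_wrong: str    # the specific Y that falsifies X

    def validate(self) -> None:
        # Vague traces do not update models: both fields must be substantive.
        for name, value in (("assumption", self.assumption),
                            ("why_wrong", self.why_wrong)):
            if len(value.strip()) < 10:
                raise ValueError(f"{name} is too vague to update the model")

    def render(self) -> str:
        """Emit the trace in a form ready to append to the dipole."""
        self.validate()
        return (f"The process assumed: {self.assumption}\n"
                f"This was wrong because: {self.why_wrong}")
```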

Process signal:

  1. File the existing crystal to drafts/ as-is — original, unmodified.
  2. Write a specific root-cause trace in the dipole: name the wrong frame.
  3. Append a revised meta entry: what would a correct generative model for this node look like?
  4. Run the full node procedure from scratch in a new archive ([slug]-b/).
  5. File the new crystal as [slug]-b.md (or update the slug if the core claim evolved).
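The filing steps are plain file operations. A sketch over a generic workspace layout — the drafts/ directory and the [slug]-b naming come from the protocol above; the workspace structure and function shape are assumptions:

```python
from pathlib import Path

def file_process_signal(workspace: Path, slug: str, original: str) -> Path:
    """File the original crystal unmodified, open a new archive, and
    return the path where the regenerated crystal should be filed."""
    # Step 1: file the existing crystal to drafts/ as-is.
    drafts = workspace / "drafts"
    drafts.mkdir(parents=True, exist_ok=True)
    (drafts / f"{slug}.md").write_text(original)

    # Step 4: the full node procedure runs from scratch in a new archive.
    (workspace / f"{slug}-b").mkdir(exist_ok=True)

    # Step 5: the new crystal lands alongside the original, never over it.
    return workspace / f"{slug}-b.md"
```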

Autonomy bounds

Re-derive the piece: full autonomy. Leave original, open new archive, run the procedure, file the crystal. No loop-in required.

Propose meta-architecture changes (pipeline modifications, changes to the node procedure itself, new automated behaviors): derive the proposal fully, surface it explicitly, wait for confirmation before implementing. The boundary: does this affect the current piece, or does it affect how future pieces are produced? The former is in-scope. The latter requires confirmation.


The compounding property

A root-cause trace that correctly identifies a frame error makes future meta-writing more accurate. A trace that names "I treated this as an implementation question when it was a frame question" updates the default prior for identifying what kind of question a given node is answering. Each trace compounds across sessions.

A crystal that gets patched without a trace produces no compounding. The output improved; the model didn't. The same frame error will recur, slightly occluded, in the next piece from the same territory.

This is the accumulation principle applied to writing. The artifact is not the product. The updated generative model is the product.


P.S. — Graph maintenance

This node is downstream of evaluation-bottleneck: that node establishes that taste is the residue of accumulated corrections and cannot be bootstrapped from descriptions. This node establishes how to receive corrections without destroying their diagnostic content. The two form a loop: evaluation quality requires taste; taste requires correctly processed corrections; correctly processed corrections require this protocol.

It applies the-corrections-are-the-product at the process level: corrections are the product only if they are received in a way that updates the generative model. In-vivo patching converts corrections into content changes, which is how to have corrections and get nothing from them.

It extends accumulation: the root-cause trace is the mechanism by which the correction stream compounds. Without traces, corrections are ephemeral. With them, each session's feedback becomes a permanent update to the process that generates all future sessions.

It pairs with a-draft-queue-discipline: that node handles priority ordering among multiple crystals on the same topic; this one explains why multiple crystals arise and why that's correct rather than a problem.

The connection to benchmark-inversion is structural: benchmark-inversion argues that evaluation infrastructure is first-class, not secondary. This node describes what to do when that infrastructure fires — i.e., what the correct response to an evaluation signal looks like. Theory of evaluation and theory of response to evaluation are companion nodes.