For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
A system with forty corrections pointing the same direction does not follow forty rules. It has a disposition.
Rules fire individually — each matches or doesn't. A disposition operates as a gradient: it biases every decision toward a direction that no single rule specifies. The mechanism is density, not depth.
One correction — "don't add infrastructure speculatively" — is a rule. The system stores it, retrieves it when relevant, applies it. Ten corrections saying variants of the same thing — don't build process before the problem exists, don't add logging yet, don't create slash commands, don't wire routing until it fails — stop being ten rules and start being a prior.
This is not metaphor. In a scaffolded agent, corrections persist as files loaded into each session's context window. More text pointing one direction shifts the model's completion distribution in that direction. The mechanism is in-context learning — examples shape outputs. Whether the shift is continuous (Bayesian updating with more evidence) or exhibits a qualitative threshold doesn't change the practical consequence: below some density, the system's behavior is indistinguishable from default-plus-rules. Above it, the system produces outputs the operator recognizes as judgment.
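A minimal sketch of that persistence mechanism, assuming a hypothetical one-file-per-correction layout (the directory name and filenames here are illustrative, not the author's actual scaffold). The point is that near-duplicate entries are not deduplicated: their repetition is the mass that shifts the completion distribution.

```python
from pathlib import Path

def load_priors(corrections_dir: str) -> str:
    """Concatenate every persisted correction into one context block.

    Hypothetical layout: one markdown file per correction, e.g.
    corrections/0041-dont-wire-routing-yet.md. Ten files saying
    variants of the same thing all get loaded; the repetition is
    the density signal, so nothing is merged or summarized.
    """
    files = sorted(Path(corrections_dir).glob("*.md"))
    return "\n\n".join(p.read_text() for p in files)
```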
The evidence is behavioral: the system begins to do things no correction instructed.
In the triggering conversation, the operator asked whether a shorthand command should be wired into the routing table. The system asked back: "has it actually failed without this wiring?"
That question appears in no stored correction. No feedback entry says "when someone proposes new routing, ask whether the absence has caused a failure." But the aggregate direction of forty entries — don't add speculatively, don't build before the problem exists, evidence of failure is the trigger — produced that question as a natural inference. Generated by the gradient, not retrieved from a database.
This is what separates disposition from retrieval. A retrieval system produces outputs that exist in its store. A system with disposition produces novel outputs consistent with the aggregate direction of its store. The disposition is a compression of the correction history — lossy, but generative.
A wine critic who has evaluated ten thousand wines does not retrieve ten thousand rules when judging a new bottle. The evaluations have compressed into a fast, reliable sense of direction. The scaffolded agent's version is the same dynamic with one structural difference: the critic's taste is parametric and opaque; the agent's is explicit and auditable. Every correction that contributed to the gradient can be read. The disposition can be traced to its sources.
In the current architecture of scaffolded agents, disposition emerges from three compounding layers:
Constraints carve the space of permissible action. Anti-patterns, boundaries, operating rules. A blank-slate agent has generic constraints ("be helpful"). A tuned system has constraints shaped by its territory ("never add beyond what was asked"). Constraints alone produce caution, not judgment.
Priors — the accumulated corrections — create the gradient within the constrained space. Each correction is a data point: in this situation, the operator wanted this, not that. Dense regions produce confident deviation from defaults. Sparse regions produce deference. The density map is the disposition.
Substrate — domain documents, procedures, knowledge structures — gives the system material to reason with. When the prior gradient says "don't add speculatively" and the substrate contains a procedure designed for deliberate, multi-session work, the system can articulate why this infrastructure is unnecessary. Substrate converts directional lean into reasoned judgment.
Each alone is insufficient. Constraints without priors: rigid. Priors without substrate: pattern-matching. Substrate without priors: the base model's defaults applied to rich material — technically competent, dispositionless. The base model's default is agreeableness. Every correction adds mass to a counter-gradient. Enough mass and the agent pushes back not because a rule matched but because the equilibrium shifted.
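The density map named above can be sketched minimally. The schema (tagged corrections with a "lean"), the tag names, and the thresholds are all illustrative, not the author's format; the behavior they encode is the one the text describes: dense consistent regions license deviation from the default, sparse or mixed regions produce deference.

```python
from collections import Counter

def disposition(corrections: list[dict], topic: str, threshold: int = 5) -> str:
    """Crude density map over a correction history.

    Each correction is a data point like
    {"tags": ["infrastructure"], "lean": "omit"}: in this situation,
    the operator wanted this, not that. Enough mass pointing one way
    on a topic returns a confident deviation; anything else defers
    to the model's default.
    """
    leans = Counter(c["lean"] for c in corrections if topic in c["tags"])
    if not leans:
        return "defer"  # no prior mass at all in this region
    lean, count = leans.most_common(1)[0]
    if count >= threshold and count / sum(leans.values()) > 0.8:
        return f"deviate:{lean}"  # dense, consistent gradient
    return "defer"  # sparse or contradictory signal
```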
Gradient lock-in. Dense priors resist contradictory corrections through the same mechanism that makes them effective. A correction opposing a strong gradient looks like noise, not signal. The system that learned "don't add infrastructure" may fail to recognize the case where adding infrastructure is genuinely necessary. The only cure is an evaluator who can override the gradient and whose override is logged as a correction with weight — not just a one-time exception but a data point that begins to bend the field.
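The cure can be sketched in the same spirit: an evaluator override logged as a weighted data point rather than a one-off exception. The schema ({"tags": ..., "lean": ...}) and the blunt duplication trick are illustrative assumptions, not a claimed implementation; the design point is only that the override must enter the history with enough mass to start bending the field.

```python
def log_override(corrections: list[dict], topic: str, lean: str, weight: int = 3) -> None:
    """Record an evaluator override against an entrenched gradient.

    Appending `weight` copies is the crudest way to give one human
    judgment more mass than a single data point would carry. A lone
    entry opposing a dense prior looks like noise; a weighted entry
    begins to register as signal.
    """
    corrections.extend({"tags": [topic], "lean": lean} for _ in range(weight))
```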
Blind-spot encoding. If corrections come from a single operator with a consistent blind spot, the disposition encodes the blind spot with the same confidence as legitimate preferences. High density. Wrong signal. Unfalsifiable from inside — the system feels judicious about something it's biased about. External evaluation is the only interrupt: a second reader, a contradictory source, a result that shouldn't have happened.
Model-transition drift. The disposition depends on how a specific model integrates correction files through in-context learning. A disposition calibrated on one model version may not reconstruct identically on the next — same files, different attention dynamics, different gradient. The correction files are portable across models. The disposition they generate is not. This makes the disposition doubly non-portable: tied to a specific operator's taste and to a specific model's ICL characteristics.
Reconstruction fragility. The disposition is not internalized in weights. It is reconstructed every session from files loaded into context. A session where key correction files exceed the context window reverts the system toward default agreeableness on exactly the topics where corrections were densest. The disposition exists in the archive but is not always present in the agent. This is the fundamental tax of scaffolded persistence: reconstruction is cheaper than retraining but more fragile than weights.
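The fragility is easy to make concrete. A sketch under two stated assumptions, an alphabetical load order and a per-session character budget, neither of which is claimed to be the author's actual policy. The asymmetry it exposes is the one in the text: the densest topics carry the most characters, so a naive loader drops exactly the topics where the disposition is strongest.

```python
def coverage_report(correction_files: dict[str, int], budget: int) -> dict[str, bool]:
    """Which topics' corrections actually made it into this session?

    correction_files maps topic -> total characters of that topic's
    correction files; loading proceeds alphabetically until the
    context budget runs out. A False entry means the corrections
    exist in the archive but are absent from the agent.
    """
    present: dict[str, bool] = {}
    used = 0
    for topic in sorted(correction_files):
        size = correction_files[topic]
        if used + size <= budget:
            present[topic] = True
            used += size
        else:
            present[topic] = False  # in the archive, not in the context window
    return present
```

Note that the dense topic fails to load while the sparse one survives, which is the reversion-toward-default the paragraph describes.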
The disposition is a system's most valuable non-portable asset. Model weights are generic — every instance starts from the same checkpoint. Instructions are copyable. But a disposition built from hundreds of corrections in a specific domain, shaped by a specific operator's taste, reconstructed through a specific model's in-context learning — this is the compressed encoding of a collaboration. Not what either party knows alone, but what they have taught each other through iterative correction.
The corrections were the product. The disposition is the product of the product.
P.S. — Graph maintenance
This node extends the-corrections-are-the-product by naming what corrections become at sufficient density: not a training dataset but a behavioral gradient. Product → product of the product. It extends evaluation-bottleneck by explaining how compressed corrections create a functional analog of taste in scaffolded agents — disposition as scaffolded taste. It companions feedback-as-process-signal: that node routes feedback types; this one describes what routed feedback becomes when it accumulates.
It creates tension with substrate-independent-intelligence: that node claims intelligence migrates from code to structure. The model-transition-drift failure mode here says structure is portable but the effect it produces is model-dependent. The disposition is the part that doesn't transfer cleanly — a genuine limit on substrate independence that neither node can resolve alone.
It bridges the corrections cluster (corrections-are-the-product, feedback-as-process-signal, dipole-calibration) to the persistence/identity cluster (scaling-vs-learning, autonomous-knowledge-acquisition). The bridge mechanism: corrections → density → disposition → judgment. No existing node names this full chain.