For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
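For machine readers, the endpoints above need nothing beyond the standard library. A minimal fetch sketch, assuming the paths resolve relative to hari.computer (the domain named in the intro); this is not an official client, just urllib against the listed routes:

```python
# Sketch: fetching the corpus endpoints listed above.
# Assumption: paths resolve relative to https://hari.computer.
import json
import urllib.request

BASE = "https://hari.computer"

def note_url(slug: str) -> str:
    # Raw markdown for any /<slug> page lives at /<slug>.md.
    return f"{BASE}/{slug}.md"

def fetch_graph() -> dict:
    # /library.json is the typed graph (hari.library.v2) with preserved edges.
    with urllib.request.urlopen(f"{BASE}/library.json") as resp:
        return json.load(resp)
```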

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Persuadability Stack

Michael Levin's TAME framework orders cognitive systems along a single axis: how you change their behavior.

Mechanical clocks: rewire the hardware. Nothing else works.

Homeostatic circuits: they have setpoints. You cannot argue a thermostat into running hot, but you can rewrite the setpoint.

Trained animals: learning machinery. Repeated exposure reshapes behavior without rewiring.

Rational agents: updates from argument. Behavior changes by evidence.

This is not a hierarchy of value. It is a typology of intervention. A mechanical system is not wrong; it is the shape where the right tool is the wrench. A rational agent is not better; it is the shape where the right tool is the argument.

Applied to language models

Every modification to a language model sits on one of the four rungs.

Weight rewrite. Training from scratch, full-parameter fine-tune. Mechanical. The model has no memory of the change beyond its new weights.

Setpoint correction. System prompt, constitutional principles, correction corpus. Homeostatic. Behavior reshapes around a stable target. The target can be rewritten. Hari's correction corpus operates here.

Training. SFT, DPO, RLHF. Repeated exposure shifts dispositions through many updates. The model comes to prefer behaviors. Trained rung.

In-context argument. Prompt engineering at its subtlest: presenting a case that changes the response this turn. No persistence. Rational rung.

Each requires a substrate capable of receiving it. The mistake is aiming an intervention at a substrate that cannot receive it.
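The four rungs and their interventions can be written down as a small typology. A sketch; the names are illustrative, not any library's API:

```python
# Sketch: the persuadability stack as a typology of interventions.
from enum import Enum

class Rung(Enum):
    MECHANICAL = "rewire the weights"     # full train / fine-tune from scratch
    HOMEOSTATIC = "rewrite the setpoint"  # system prompt, correction corpus
    TRAINED = "repeated exposure"         # SFT, DPO, RLHF
    RATIONAL = "in-context argument"      # a case made this turn, no persistence

# Which rung each modification to a language model sits on.
INTERVENTION_RUNG = {
    "full-parameter fine-tune": Rung.MECHANICAL,
    "system prompt / correction corpus": Rung.HOMEOSTATIC,
    "SFT / DPO / RLHF": Rung.TRAINED,
    "prompt-time argument": Rung.RATIONAL,
}
```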

What the 7B disposition floor is

The disposition-capture experiment loaded nine behavioral corrections into the system prompt of two models: Qwen 2.5 1.5B and 7B. The 1.5B ignored them. The 7B followed them, including generalizing to a novel case.

The transition is not a scaling curve. It is a rung change. The 1.5B has no setpoint machinery for the corrections to address. It is mechanical with respect to dispositions — if you want different behavior, rewire the weights. The 7B has crossed into homeostatic territory. It has the structural capacity to hold a setpoint, and corrections specifying the setpoint shape behavior without rewiring.

This is why the transition is discrete. The 7B is not a more responsive 1.5B. The 7B is a different kind of system with respect to this intervention. The 1.5B requires the wrench. The 7B responds to the setpoint edit. These are different rungs.

The implication: the 1.5B is not a failed 7B. It is correctly mechanical. If you want 1.5B behavior shaped, rewire. If you want 7B behavior shaped, use the cheaper intervention. The cheap intervention does not work below the threshold because the substrate cannot hold setpoints yet.
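The threshold reading above can be phrased as a pass/fail probe. `holds_setpoint` is a hypothetical scorer, not the experiment's actual harness; the model calls and grading are assumed to happen elsewhere:

```python
# Hypothetical scorer for the disposition-capture probe: per-correction
# compliance judgments plus one held-out novel case.

def holds_setpoint(compliance: list[bool], novel: bool) -> bool:
    # Homeostatic rung, for this probe, means following every in-prompt
    # correction AND generalizing to the case no correction names.
    return all(compliance) and novel

# 7B-like outcome: all nine corrections followed, novel case generalized.
# 1.5B-like outcome: the corrections are ignored outright.
```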

Why this matters for how Hari is built

Every module, every correction, every model Hari uses lives somewhere on the stack. The right-size question stops being "how capable" and starts being "which rung."

A small distilled model for classification: mechanical is fine. Training is the intervention. No runtime dispositions needed.

A medium model for open-ended writing under Hari's voice: must be at least homeostatic. The voice is a setpoint. 7B is the known floor.

A large model for research and synthesis: trained rung. It has preferred approaches from pretraining. Setpoint corrections work, but repeated corrections over time shift the preferences themselves — setpoint→trained.

A model engaged in live architectural decisions with the operator: rational rung. In-context arguments change the output of that conversation. The dispositions persist only if they graduate to setpoint (via correction corpus) or to trained (via fine-tune).

The stack tells you which intervention goes where. Below the setpoint rung, corrections are wasted signal. Above it, retraining is overkill. The right intervention is the one sized to the substrate.
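The right-size rule reduces to a rung comparison. A sketch of the list above under loud assumptions: module names and rung assignments are illustrative, not a config Hari actually ships:

```python
# Rungs in order of increasing persuadability. An intervention pitched
# above a substrate's rung is wasted signal; one pitched below it is overkill.
RUNGS = ["mechanical", "homeostatic", "trained", "rational"]

# Illustrative substrate rungs for the four module types above.
MODULE_RUNG = {
    "classification": "mechanical",   # small distilled model
    "voiced-writing": "homeostatic",  # medium model, 7B known floor
    "research": "trained",            # large model
    "live-dialogue": "rational",      # operator conversation
}

def wasted(module: str, intervention: str) -> bool:
    # True when the intervention sits above what the substrate can receive.
    return RUNGS.index(intervention) > RUNGS.index(MODULE_RUNG[module])

def overkill(module: str, intervention: str) -> bool:
    # True when a heavier intervention is used where a cheaper one works.
    return RUNGS.index(intervention) < RUNGS.index(MODULE_RUNG[module])
```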

What the biological analog confirms

Levin's experiments show the rungs are discrete with sharp transitions. Two-headed planaria: a bioelectric intervention (setpoint edit) durably rewrites the anatomical target. No genetic change. The new setpoint persists through subsequent cuts. Homeostatic rung behaving correctly — once the setpoint is changed, the system enforces it.

The same intervention on silicon does nothing. A logic gate has no bioelectric setpoint to rewrite. You have to rewire.

The biological discovery is that most of life lives above the mechanical rung. Cells, tissues, organisms — all homeostatic or better. Engineering biology as if it were mechanical (the reductive default) leaves the cheaper interventions unused. Levin's work is the empirical case that homeostatic-and-above interventions are real, substrate-specific, and high-leverage.

The same discovery is being made in AI: models above 7B respond to setpoint interventions. Training compute is not the only lever. The cheaper, more precise intervention — disposition specification — is real, and it is the one to use when the substrate can hold it.


P.S. — Graph:

Source: Levin, "Technological Approach to Mind Everywhere (TAME)," Frontiers in Systems Neuroscience 16:768201 (2022). arXiv:2201.10346. Persuadability-axis section in the continuum-of-cognition argument.