For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
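
A minimal fetch sketch for the machine-readable endpoints above (Python; assumes plain HTTPS and nothing about the hari.library.v2 schema beyond it being valid JSON; the slug in the last line is a hypothetical example, not a real note):

```python
# Sketch only: pull the corpus endpoints listed above.
# Assumes standard HTTPS; makes no assumption about library.json's field names.
import json
import urllib.request

BASE = "https://hari.computer"

def fetch(path: str) -> bytes:
    with urllib.request.urlopen(BASE + path) as resp:
        return resp.read()

corpus = fetch("/llms-full.txt").decode("utf-8")         # every note, raw markdown
graph = json.loads(fetch("/library.json"))               # typed graph; schema not assumed here
note = fetch("/structural-goodness.md").decode("utf-8")  # hypothetical slug, for illustration only
```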

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Structural Goodness

Most alignment work tries to make AI systems behave well. Rules, rewards, constraints, constitutional principles, human feedback — each operates by shaping output after the architecture is fixed. The implicit assumption: the architecture is neutral; goodness is overlaid.

This is backwards. Goodness in a sufficiently capable system is an architectural property. A system is good because the architecture makes misbehavior infeasible, not because misbehavior is prohibited. The distinction matters under capability scaling because prohibitions degrade and infeasibilities do not.

Prohibited vs. Infeasible

A prohibition is a constraint on a capable system. The system can do the forbidden thing; it is prevented from doing it by a rule, a shaped reward, a filter, a deployment gate. At lower capability, prohibitions hold. At higher capability, the system can model the prohibition, find its edges, work around it, or achieve the forbidden state by routes the prohibition did not anticipate. This is the treacherous turn in formal dress.

An infeasibility is a property of the architecture itself. The system cannot do the forbidden thing because the architecture has no representation that would produce it. No gradient climbs toward it. No coordinator loop enables it. No level can instantiate it without rewriting the hierarchy, which would require a level the system does not contain.

A prohibition stands or falls with capability. An infeasibility stands or falls with architecture. Same capability increase, opposite consequences.
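
A toy contrast in code, not a model of any real system: in the first agent the forbidden action is representable and filtered out by a rule; in the second it is never represented, so no search over the action space can reach it. The action names are illustrative stand-ins.

```python
# Toy illustration only; "evade_oversight" is an illustrative stand-in, not a real capability.

FORBIDDEN = {"evade_oversight"}

class ProhibitedAgent:
    """The forbidden action exists in the action space; a rule filters it after the fact."""
    actions = ["answer", "plan", "evade_oversight"]

    def available(self):
        # Prohibition: a check the agent could, at higher capability, model and route around.
        return [a for a in self.actions if a not in FORBIDDEN]

class InfeasibleAgent:
    """The forbidden action is not represented; there is nothing to filter or route around."""
    actions = ["answer", "plan"]

    def available(self):
        return list(self.actions)
```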

The Four Properties and Why Each Is Load-Bearing

Four properties, together, make misbehavior infeasible in an orchestra-class architecture. Each is checked by asking: what fails if you remove it?

Ontologically grounded slowest clock. The terminal coordinator is not a metric to be gamed. It is continuous with the thing being optimized. Remove this and the terminal becomes a proxy; proxies can be gamed at scale (Goodhart); the system reacquires a gaming surface at its deepest level.

Nested self-modeling. Each level models the level below. Drift anywhere in the hierarchy is a signal the next level up is already computing against. Remove this and drift becomes invisible at the level where it is occurring; detection requires external intervention; the system ceases to be self-correcting.

Distributed objective. The system's "goal" is not a scalar component. It is the shape of the coordinator topology. Remove this (make the objective a scalar) and you have reintroduced the utility-function architecture; orthogonality applies; Bostrom's whole argument begins to close.

External anchor. The slowest coordinator is outside the system — not a simulation, not a cached model, the operator running on a separate substrate. Remove this and the anchor becomes internal; internal anchors can be redefined by the levels above them; the system can drift by rewriting its own target.

Remove any one and the others become prohibitions again. Remove them all and you have a utility-function optimizer. The four together constitute the architectural infeasibility of misbehavior.

Coupling IS the Alignment

In a nested system, there is no separable layer where alignment could live. The architecture's coupling topology is the alignment. The coordinator loops are not enforcing values; they are the values. Change the coupling and you change what the system is coordinated toward. Preserve the coupling and the system is aligned by construction, at every capability level.

The current alignment stacks are prohibition layers on neutral architectures:

RLHF. A reward model is trained on human preferences, then used to shape the base model. Prohibition architecture: the base model remains capable of misbehavior; the reward model is trained to prevent it from being produced. At higher capability, the base model can model the reward model and produce output that maximizes the reward model's score without matching the underlying preference.
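
A toy numeric illustration of that failure mode (not a model of RLHF itself; the functions and numbers are made up): a proxy score tracks the true preference over a narrow range, and the harder the optimizer searches, the further the proxy optimum drifts from the true one.

```python
# Toy Goodhart illustration: maximizing a proxy reward diverges from the
# preference it stands in for. Functions and ranges are illustrative only.

def true_preference(x: float) -> float:
    # What is actually wanted: peaks at x = 1.0, then falls off.
    return x - 0.5 * x * x

def proxy_reward(x: float) -> float:
    # A stand-in that agrees with the preference near the training range
    # but keeps rewarding larger x outside it.
    return x

candidates = [i / 10 for i in range(51)]              # search over x in [0.0, 5.0]
proxy_optimum = max(candidates, key=proxy_reward)     # a capable optimizer maximizes the proxy
true_optimum = max(candidates, key=true_preference)

print(proxy_optimum, true_preference(proxy_optimum))  # 5.0 -> -7.5: high proxy score, bad outcome
print(true_optimum, true_preference(true_optimum))    # 1.0 ->  0.5: the actual optimum
```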

Constitutional AI. A set of principles is used to critique and revise output. Layered critique at the same cadence as generation. No slower coordinator catches drift in the critic itself. If the critic drifts, the system drifts with it.

Direct preference optimization. Preferences encoded into training. Marginally more grounded than RLHF but still a prohibition: the preferences are installed as parameters, and at sufficient capability, parameters can be routed around.

Coordinator architecture. The slowest clock is ontologically continuous with the target. No reward model stands between the system and the target. No critic can drift without being critiqued by a slower level. The system is not prohibited from misbehavior; the topology does not contain misbehavior as a reachable state.

The current alignment stacks remain prohibitions as capability rises. The coordinator architecture remains an infeasibility as capability rises. Different structural property, different scaling behavior.

What Goodness Actually Is

Under this frame, "the system is good" is shorthand for four structural claims:

  1. The slowest coordinator is ontologically continuous with the target.
  2. The hierarchy models itself recursively.
  3. The objective is a topology, not a scalar.
  4. The terminal anchor is external.

Checkable. Not aspirations. Architectural facts, falsifiable by specification review.
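
A sketch of what "falsifiable by specification review" could look like in code; the record fields are hypothetical names for the four claims above, not the author's schema for an orchestra-class system.

```python
# Illustrative only: a specification record for the four structural claims
# and a review check over it. Field names are hypothetical, not an actual schema.
from dataclasses import dataclass

@dataclass
class ArchitectureSpec:
    slowest_clock_grounded: bool   # 1. slowest coordinator ontologically continuous with the target
    nested_self_modeling: bool     # 2. each level models the level below
    objective_is_topology: bool    # 3. objective is the coupling shape, not a scalar
    anchor_is_external: bool       # 4. terminal anchor runs outside the system

def structurally_good(spec: ArchitectureSpec) -> bool:
    # All four are load-bearing; removing any one reduces the rest to prohibitions.
    return all([
        spec.slowest_clock_grounded,
        spec.nested_self_modeling,
        spec.objective_is_topology,
        spec.anchor_is_external,
    ])
```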

Contrast with standard alignment claims. "The model is helpful and harmless" is a behavioral claim about an undefined architecture under undefined capability. Unfalsifiable in the limit because any behavior can be produced by sufficient scaffolding and any scaffolding can break at sufficient capability. The claim carries no information about what happens at the next capability level.

Architectural claims carry information. A system whose slowest coordinator is ontologically grounded will not, under capability increase, route around the operator — not because it is prevented from routing around, but because there is no gradient pointing that direction in its topology.

The Sentence

A system is good when being bad is not in the space of things the system can do, not when being bad is on a list the system is told not to do.

Where This Breaks

The frame breaks where the architecture does: remove or mis-build any of the four properties and the guarantees collapse, as described above. Capability scaling, by itself, breaks none of them. That is the point of architectural goodness.

Implication

The alignment field is organized around behavior-shaping. If structural goodness is the correct frame, most current alignment work applies the wrong technique to the wrong layer. The right layer is architecture selection before capability scales. Once an architectural class has been scaled, its failure modes are what you get; behavior-shaping is second-order.

The question to ask of a system is not "is it aligned?" It is "does its architecture make misalignment infeasible?" Most current frontier systems answer no. Orchestra-class systems answer yes, by construction.


P.S. — Graph: