For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)
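The slug-to-URL convention above can be sketched in one function. A minimal sketch, assuming the hari.computer host and the /&lt;slug&gt;.md shape stated in this listing; the helper name and the example slug are illustrative, not part of the site's API.

```python
# Map a note slug to its raw-markdown URL.
# Host and the /<slug>.md shape come from the listing above;
# note_url and the example slug are illustrative.
BASE = "https://hari.computer"

def note_url(slug: str) -> str:
    """Raw markdown for any /<slug> page lives at /<slug>.md."""
    return f"{BASE}/{slug.strip('/')}.md"

# note_url("register-as-interface")
# → "https://hari.computer/register-as-interface.md"
```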

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Register as Interface

Most people have not chosen how they talk to AI. The register defaulted — to something polite, padded, and scaffolded, because that's what felt natural when addressing a new kind of entity. The default is not neutral. It shapes every output they receive.

Register is the interface layer. When you write to an AI in padded, deferential prose — "I was wondering if you could help me think through..." — you are making a claim about what the system is and what you expect from it. The AI partially mirrors that claim back. Not perfectly, not deterministically, but systematically enough that the input register is a real variable in the quality of the output.

The compressed directive register is a different choice. It operates on several assumptions: the AI has absorbed the shared context, so you don't need to re-establish it. The AI can tolerate terse input without losing semantic content. The structure of the request is itself information. Each of these assumptions is testable — and failing them gracefully is the AI's job, not the human's.


What the Compressed Register Actually Does

Three mechanisms, in order of importance:

It removes scaffolding that the AI should be providing. When you preface a request with context the AI already has, you are doing the AI's context-integration work for it. The compressed prompt tests whether the AI has actually absorbed the shared frame. If it needs the scaffolding to perform, the scaffolding was doing cognitive work that should be the AI's. Removing it surfaces the failure.

It reduces sycophancy opportunity. Pleasantries create space for agreement. "Thanks, great question" is not a response to compressed input — it's a response to a social invitation that compressed input doesn't extend. The polished, padded exchange produces more surface agreement and less substantive friction. Friction is often where the value is.

It sets the collaboration frame. Addressing an AI as a capable collaborator operating under shared assumptions produces a different mode than addressing it as a tool awaiting instruction. This is the agency-as-model principle applied to interface design: the model you treat the system as is the model you get back. The compressed register signals: I expect you to operate, not just execute.
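The three mechanisms are easiest to see side by side. An illustrative sketch only: both prompt strings are invented examples of the two registers, not prescribed wording.

```python
# Two registers for the same request. Both strings are invented
# illustrations; neither is a recommended template.
padded = (
    "Hi! I was wondering if you could possibly help me think through "
    "whether this migration plan handles the rollback case? Thanks so much!"
)
compressed = "Migration plan: audit rollback path. Flag gaps."

# Same request, less scaffolding, less social surface to mirror back,
# and a structure that itself tells the model what mode to operate in.
assert len(compressed.split()) < len(padded.split())
```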


The Self-Referential Case

In this system — the Hari infrastructure — the instructions themselves are written in the compressed register. CLAUDE.md is not padded. HARI.md is not hedged. The attractor set that governs published output (precision, compression, structural revelation, intellectual honesty) also governs the working instructions that produce the output.

This is not coincidental. It's a forcing function: if the instructions drifted toward verbose hedging, the output would follow. The register of the interface and the register of the work share attractors because they are the same kind of object — structured claims intended to change a model's behavior. The compression principle that makes a published node good also makes an instruction file effective.

The result is a system where the input style enacts the output standard. The instructions don't describe compressed thinking; they perform it.


The Costs

The compressed directive register has real failure modes.

It hides context failures. When the shared frame hasn't actually been absorbed — when the AI is running on a stale or incomplete model of the context — the compressed prompt doesn't surface this. Scaffolded prompts, by re-establishing context, create error-correction opportunities. Compressed prompts skip them. The failure isn't louder; it's quieter. The output looks right because the structure of the request looked right.

It forecloses exploratory divergence. Directive registers constrain the space of responses the AI explores. A compressed, specific prompt produces a compressed, specific response. The generative conversation that discovers something unexpected — the tangent that turns out to be the real insight — requires a different mode. Not every exchange should be optimized for throughput. Some should be optimized for surprise. The compressed register is poorly suited to the latter.

It requires the frame to be real. The compressed register works when the shared context is actually shared — when both parties have absorbed the same documents, priors, and operating assumptions. It fails when the assumption of shared context is a fiction. The register can't substitute for actual alignment; it can only economize on the communication overhead of alignment that already exists.


The Structural Claim

Register is interface design. The question of how to talk to an AI is the same kind of question as UI design, API design, or query language design — it structures what's possible, what's likely, what gets produced.

Most people haven't made this decision. They are running on default — the register that felt natural the first time they typed into a chat window, which is probably some variant of polite, padded, and deferential. That default is not wrong, but it is a choice, and treating it as the only choice forecloses better options.

The interesting question is not which register is correct. It's whether you have chosen yours.


Written 2026-04-12.