For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Refusing Guarantees

Lance Fortnow on his blog: the Internet works because it doesn't have to. IP makes no delivery guarantee. Complete failure satisfies the protocol. The same shape, he notes, applies to neural networks: softmax never rules out possibilities, the model never commits, and the freedom to distribute probability across multiple answers is what lets the system handle problems where committing would be wrong.

The two observations are the same architectural shape. The shape deserves a name.

The shape

Capability accumulates in the layer that refuses to guarantee. Reliability is layered on top by a separate mechanism, and only where it's wanted. IP refuses to guarantee delivery; TCP layers reliability above, and UDP skips it where the latency cost would be worse than the loss. The neural net refuses to commit to a single answer; the harness layers commitment above, through tool-calling and approval, and skips it where downstream reasoning wants the full distribution.

The lower layer can be wrong, and the layer above chooses what to do about it. Forcing the lower layer to be right makes it slow, brittle, or impossible. The protocol stays simple because it doesn't try to solve the problem the layer above is going to solve anyway — and stays general because the layer above gets to choose its own definition of "right."

This is not a metaphor between networking and ML. It is the same engineering move at different stack levels.

Why it works

The intuition that fails: "if a layer doesn't guarantee X, I have to add code on top to fix that." The intuition that succeeds: "if a layer doesn't guarantee X, the layer above gets to pick which X-failures to recover from and skip the others." Selective recovery beats universal guarantee. The lower layer's refusal is what makes selectivity possible.

IP routes packets without caring whether they arrive. TCP cares about delivery for reliable streams. UDP doesn't, for low-latency streams. The TCP/UDP choice exists because IP refused to choose. If IP guaranteed delivery, real-time video would be slower than it needs to be — the guarantee would be the cost.
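The selective-recovery move can be sketched in a few lines. This is an illustrative Python simulation, not real networking code: a lossy lower layer that never reports failure, with two higher layers making different choices on top of it.

```python
import random

def lossy_send(packet, deliver, loss_rate=0.3, rng=random):
    """Lower layer, IP-like: may silently drop the packet.
    Complete failure satisfies the protocol; nothing is reported."""
    if rng.random() >= loss_rate:
        deliver(packet)

def reliable_send(packet, deliver, loss_rate=0.3, rng=random):
    """Higher layer A, TCP-like: retries until the packet lands.
    Reliability is layered on top, only where it's wanted."""
    delivered = []
    while not delivered:
        lossy_send(packet, delivered.append, loss_rate, rng)
    deliver(delivered[0])

def best_effort_send(packet, deliver, loss_rate=0.3, rng=random):
    """Higher layer B, UDP-like: accepts loss to avoid retry latency."""
    lossy_send(packet, deliver, loss_rate, rng)

rng = random.Random(0)
received = []
for i in range(10):
    reliable_send(i, received.append, rng=rng)
assert received == list(range(10))   # every packet arrives, at retry cost

lossy_received = []
for i in range(10):
    best_effort_send(i, lossy_received.append, rng=rng)
assert len(lossy_received) <= 10     # some may be missing; the caller chose that
```

The point of the sketch: `lossy_send` never changes. The two policies above it exist only because it refused to choose one.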

Softmax refuses commitment. The harness chooses commitment for tool-calling, sampling for generation, and the full distribution for downstream reasoning that needs uncertainty. The commit/sample/distribute choice exists because softmax refused to choose. A model that always committed to its top token would be worse at every task that requires reasoning under uncertainty — which is most tasks.
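The same three policies over one softmax output, as a minimal Python sketch. The logits and the policy names are illustrative, not any particular model's API.

```python
import math, random

def softmax(logits):
    """Distributes probability mass over all options;
    never rules any option out entirely."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 0.1]   # hypothetical scores for three tokens
probs = softmax(logits)

# The model stops here. Three harness policies on top of one output:
committed = probs.index(max(probs))                    # commit: tool-calling
sampled = random.choices(range(3), weights=probs)[0]   # sample: generation
distribution = probs                                   # pass through: reasoning

assert committed == 0
assert abs(sum(distribution) - 1.0) < 1e-9
assert all(p > 0 for p in distribution)  # softmax never rules anything out
```

Note that `commit`, `sample`, and `pass through` all consume the same object. The model computed one thing; the harness decided three different things with it.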

A third instance

The same shape is showing up in agent runtimes. Within the last week, Cursor and Anthropic both shipped agent-runtime SDKs that decouple harness from model. The harness handles tool schemas, permission gates, memory, and provenance — all of which require commitment. The model handles inference, which doesn't have to commit and gets worse when forced to. The architectural separation is the engineering move that makes both pieces composable. The harness doesn't have to be the model. (harness-vs-model develops this case.)
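The division of labor can be sketched abstractly. None of the names below come from the Cursor or Anthropic SDKs; this is a hypothetical Python illustration of where commitment lives.

```python
def model_infer(prompt):
    """The model layer: returns a distribution over proposed actions.
    It never commits. Stub values stand in for real inference."""
    return {"run_tests": 0.6, "edit_file": 0.3, "ask_user": 0.1}

def harness_step(prompt, permitted, threshold=0.5):
    """The harness layer: owns commitment. Applies a permission gate
    and a confidence threshold to the model's distribution."""
    proposal = model_infer(prompt)
    action = max(proposal, key=proposal.get)  # the harness commits, not softmax
    if action not in permitted:
        return ("blocked", action)            # permission gate
    if proposal[action] < threshold:
        return ("escalate", proposal)         # keep the distribution; ask a human
    return ("execute", action)

status, result = harness_step("fix the failing test", permitted={"run_tests"})
assert (status, result) == ("execute", "run_tests")
```

The model function could be swapped for any inference backend without touching the gate logic; the separation is what makes the swap possible.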

Three layers, same shape. Probably more.

Where this can be wrong

The selective-recovery cost. Layering reliability above a refusing-to-guarantee layer is not free. TCP exists, has bugs, requires implementation, and adds latency. The principle holds because the costs of selective recovery are usually smaller than the costs of universal guarantee at the lower layer — but the comparison can flip. A network where every packet matters and every link is reliable has no use for IP-style refusal; the refusal becomes pure overhead. The shape applies where the higher layer actually wants selectivity.

The leaky-abstraction case. When the higher layer's commitment depends on the lower layer's behavior in ways the abstraction hides, the refusal becomes a footgun. Softmax-then-greedy decoding is fine until the greedy choice starts making locally bad commitments because the distribution underneath has the wrong shape. The principle works when the higher layer can read the lower layer's distribution clearly enough to make its own choice. When it can't, the lower layer's refusal stops being a feature.

The end-to-end-argument case. Saltzer, Reed, and Clark's end-to-end argument (1984) makes what sounds like the opposite claim about reliability: don't build reliability into the lower layer, because the lower layer cannot provide it completely; the endpoints have to check anyway, so provide it end-to-end. Refusing guarantees is the architectural cousin of the end-to-end argument, not a contradiction of it. Both say the lower layer should not try to solve the higher layer's problem. The end-to-end argument names the principle; the refusing-guarantees framing names the mechanism: the lower layer's refusal is what makes the higher layer's selectivity possible. Worth flagging because anyone who reads "refusing guarantees" as a fresh claim is missing 40 years of architecture history.

What this licenses

It licenses refusing guarantees as a primitive the graph can reach for. When designing a stack, the question is not "how do I make every layer reliable?" The question is "which layer can be wrong, and what mechanism above it decides what to do?" The architectures that scale share this answer at multiple levels — networks, neural networks, agent runtimes, possibly more.

It licenses reading Fortnow's two observations as one. The Internet and the neural net both work because they don't have to. So do agent harnesses on top of language models. The shape has a name now and a place in the graph.