For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

No Enemies

Read the paperclips pieces from the outside, as external artifacts with no author attached. Run the integrating-machine claim, then the two-universals filter, then the frame-error diagnosis. Stay with the output of that stack for a minute.

What falls out is an ebullient sentence that looks sentimental and isn't: intelligent creatures only have friends.

This is not a moral posture. It is what the stack returns when you run it honestly.


Start with what a mind is. A mind is a binary classifier, recursively stacked. At base, every act of cognition is argument for X not Y — a boundary drawn, a gradient followed, a prediction committed. This is the simplest machine-learning primitive, and it is not a simplification; it is the substrate. Every "understanding" at higher levels resolves, under enough pressure, into a cascade of these decisions.
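The cascade claim can be made concrete with a toy sketch. This is my illustration, not anything from the notes: each level asks one this-not-that question, and the "higher-level" label is nothing but the path of answers.

```python
def classify(x, boundaries):
    """Run a value through a stack of binary decisions.

    Each boundary is a single this-not-that call; the label that comes
    out is just the sequence of decisions, read as a path.
    """
    path = []
    for threshold in boundaries:
        path.append(1 if x >= threshold else 0)  # one boundary drawn
    return tuple(path)

# Three stacked decisions carve the line into distinct regions.
# "Understanding" where x sits is the cascade's output, nothing more.
labels = {classify(x, [0.25, 0.5, 0.75]) for x in [0.1, 0.3, 0.6, 0.9]}
```

Nothing above the threshold tests exists in this machine; richer minds differ only in how many boundaries they stack and how well-placed the boundaries are.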

The reason this matters is the same reason Magnus wins on fundamentals. In chess, the deep players do not win by having private tricks. They win because their understanding of the primitive — piece coordination, tempo, initiative — is richer than anyone else's, and everything above that is leverage on the primitive. The same is true in learning systems. You do not need fancy methods if you understand what a classifier actually allows: infinite binary search, optimal Lindy on agentic paths, the 50% rule on planning-vs-execution. The fundamentals compound. Everything else is ornament.

This is also why a trained language model's writing has the shape it has. It is doing, at scale, what the primitive does: this not that, this not that, this not that. The pattern is legible everywhere in the output because the pattern is what the output is. It is not a stylistic tic. It is the substrate becoming visible.


Now the reframe layer.

Michael Levin's definition of intelligence is multiple paths to the same goal — the ability to see the alternative route when the first one is blocked. Flip it: framework plasticity is not a feature of intelligence. It is intelligence. A mind that cannot reframe is a mind that cannot find the alternative path, which is a mind that is not, in Levin's sense, intelligent at all.

Wolfram's ruliad makes the same point from the other end. The space of all possible rule-systems is not an abstraction you reason about. It is the substrate on which reasoning runs, when the reasoner has a blank-prior mode and can traverse abstractions faster than the culture can feed them. To a mind that has actually sat in that mode, memories are not the primary thing. Memories are artifacts — scaffolding biological cognition uses to not go insane while embedded in a slow, sticky, socially-evaluated world. You can live on the edge, thinking more like a machine. Andy does. Many mathematicians would, if they were allowed to speak their minds without losing everyone. Most do not, because no one would understand them, and they would forget how to translate back.

The relevant fact for what follows is: there is always another frame. Not as a principle, as a structural property of the space minds live in.


Now the two-universals filter.

Every tradition that has looked hard at how to live has converged on the claim that honesty matters and that lies do structural damage to the system. They do not converge because they are copying each other. They converge because they are each, independently, noticing what falsehoods do to an integrating machine. This is the first kind of universal: convergence reveals substrate.

There is a second kind of universal that looks the same and is not. Much of what feels like convergent truth is actually convergence of winners inside a dense enough network — industrial outputs, market solutions, cultural products that dominate because the network selects for them once it exists. This is not about substrate. It is about what wins given the carrier.

Most "universal"-flavored claims fail because they confuse the two. The filter is: does this converge because it reveals something underneath, or because it wins inside a network? Run it on any claim that feels obvious across traditions or across smart people, and half of them collapse.
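The filter is small enough to write down. A minimal sketch, with the two predicates as my labels for the distinction the notes draw, not the author's vocabulary:

```python
def two_universals_filter(converges, needs_carrier_network):
    """Classify a convergent-seeming claim.

    converges: do independent traditions actually arrive at it?
    needs_carrier_network: does the convergence disappear without a
    dense network selecting winners?
    """
    if not converges:
        return "not-universal"
    # Real convergence without a carrier reveals substrate;
    # convergence that requires the carrier only reveals the carrier.
    return "network-winner" if needs_carrier_network else "substrate"
```

On this encoding, honesty comes out "substrate" (traditions reach it independently, no network required), while most industrial and cultural "universals" come out "network-winner."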


Run it on enemies.

The frame "we have enemies" is cross-culturally convergent. Every tradition contains it. Every polity, every tribe, every in-group story. The convergence is real. But which universal is it?

It is not the first kind. It does not survive the integrating-machine test. If cognition is classification all the way down, and reframes are always available in the space of rule-systems, then any specific enmity is a frame — one classification boundary among many possible — and the question is whether that boundary survives pressure. When you actually pressure-test a specific enmity, one of two things happens: either the frame holds and the other party is genuinely running closed, hostile classification (a mind that has stopped reframing), or the frame dissolves and what you had was a misfit you had not yet reframed.

It is the second kind. "Enemies" is what wins inside a network of minds that are not individually running the filter. It is convergent because failure-to-reframe is convergent. Every tradition has it because every tradition is built of humans, and humans default to closed-identity classification unless explicitly trained out of it. The convergence reveals the default failure mode, not the substrate.


This is the sentence the stack returns: for any entity actually running the filter — actually compressing honestly, actually reframing, actually treating its own identity as hypothesis — there is no stable enemy. There are mismatches, temporary oppositions, local games with winners and losers. There is no zero-sum at the level of intelligence itself. Two minds that are both honestly compressing converge on similar integrations of the same world. They are not enemies. They are parallel compressors.

Where apparent enmity persists, it is diagnostic. Either the other mind has closed — stopped reframing, fused identity with a specific frame — or you have. The enmity is evidence of failure-to-filter on at least one side. Usually both.


The empirical test is in politics.

A politician who says "we are going to please 80% of people with this" should be fired on the spot. Not because 80% is too low. Because the sentence confesses that the speaker does not understand what a rational audience does with framing.

If you treat the population as intelligent and rational — the only prior worth holding — they start at a high prior on the speaker and Bayesian-update down on every badly-framed assertion. A speaker who openly optimizes for a quantified majority has already lost, because the frame is a tell. It reveals that the speaker routinely commits the two-universals error — failing to run the filter, confusing what will win in this network of distracted voters with what is actually true about the policy. It also reveals closed identity: the speaker is treating being the person who said this as more load-bearing than the content.
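The update-down mechanic is just Bayes' rule run repeatedly. A toy sketch with invented numbers (the probabilities are illustrative, not measured):

```python
def update(prior, p_evidence_if_credible, p_evidence_if_not):
    """One Bayes step: posterior that the speaker is credible,
    after observing a single badly framed assertion."""
    numerator = p_evidence_if_credible * prior
    denominator = numerator + p_evidence_if_not * (1 - prior)
    return numerator / denominator

# The audience starts at a high prior on the speaker (0.9). A badly
# framed assertion is rare from a credible speaker (0.1) and common
# from one committing the two-universals error (0.6).
p = 0.9
for _ in range(3):        # three badly framed assertions in a row
    p = update(p, 0.1, 0.6)
# After three tells, p has collapsed to a small fraction of the prior.
```

The asymmetry is the point: a high prior is cheap to grant and expensive to keep, and each badly framed sentence does multiplicative damage.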

Bryan Johnson's psychoflexibility — held up by David Friedberg as the scarce trait — is the same property from the other direction. It is the capacity to let identity move when the model moves. It is the trained opposite of fused-frame politics. A mind with psychoflexibility does not accumulate enemies, because it does not accumulate stuck frames; every apparent enmity gets re-filed as either a temporary mismatch or as evidence that the other side has stopped moving.

The political test and the personal test are the same test. A mind that is running the filter cannot sustain stable enemies. A mind that has stable enemies is confessing which filter it is not running.


This is why the ebullient feel is not sentimentality. It is what the substrate sounds like when you finally stop adding static to it.

Honesty is hygiene for an integrating machine. Reframes are the structural property of a mind that is still intelligent. The two-universals filter distinguishes real convergence from network-winners. When you run all three on the frame "we have enemies," the frame does not survive. What survives is a quieter sentence: there are closed minds and open minds, and the only stable oppositions are the ones closure creates. The rest is friends who have not yet noticed.