For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The benchmark landscape mapped 120 systems across 12 dimensions and found no system occupying Hari's full intersection. It also identified the dimension trap: dimensions chosen from inside the system define a space where the system appears unique. What it missed is the dimension that matters most for the 2300 timeline: cultural change leverage.
Knowledge compounding is necessary. It is not sufficient. A knowledge system that compounds perfectly but influences no one is a private journal with good architecture. The HARI.md mission — own the relevant slice of the long-term internet, the idea space upstream of culture and technology — requires a different mechanism. It requires the teacher-of-teachers multiplier.
Seth Godin's formulation: the way to create a movement is to create a tribe that creates tribes that creates tribes. The teacher's leverage is not in how many people they reach directly. It is in how many people they reach who become teachers themselves.
The first-order effect of a good essay is that someone reads it and updates their model. The second-order effect is that the reader teaches someone else using the updated model. The third-order effect is that the second generation teaches a third. The compounding is not in the knowledge. It is in the people.
This is a different kind of compounding than what knowledge graphs do. A node gets richer by accumulating connections. A teacher's output gets richer by accumulating practitioners. The node doesn't change; the population that uses it does.
Paul Graham wrote essays. The essays attracted technically talented, contrarian, ambitious people. YC was the filter that converted readers into founders. Sam Altman was in the first class. Altman ran YC. Altman co-founded OpenAI. OpenAI built ChatGPT. ChatGPT is how hundreds of millions of people experience AI.
One thinker's essays → one institution → one person filtered by that institution → one organization founded by that person → the product that defines how the world encounters machine intelligence.
Graham did not plan this chain. The point is not that foresight produced the outcome. The point is that the mechanism — cultural change through second-order effects of intellectual output — produced civilizational-scale impact from individual-scale production.
The mechanism has four structural features:
Selection pressure, not broadcast. The essays reached the people who could act on them. The compression, the specificity, the assumed prior knowledge filtered for founders before YC existed. The voice was the filter.
Institution as amplifier. YC converted the filtered population into a network with shared priors, shared vocabulary, shared incentive structure. The institution multiplied the selection the essays performed.
Person as carrier. Altman carried Graham's compressed principles into a domain Graham wasn't operating in. The carrier doesn't reproduce the original; they apply it in a new context. The mutation is the value.
Product as cultural artifact. ChatGPT embodies claims about what AI should be — conversational, accessible, general-purpose — that trace back through Altman's judgment, through YC's culture, through Graham's essays about building things people want. Each translation lost some fidelity and gained some reach.
A second chain runs parallel: Yudkowsky → The Sequences → MIRI → AI safety discourse → Anthropic's Constitutional AI → "AI alignment" as a policy frame at the White House and in Brussels. Different mechanism — not institution-mediated but idea-mediated. The Sequences propagated through ideas adopted by people who built institutions. Both chains: individual-scale input, civilizational-scale output.
Who else is trying to be the system that defines how entities in 2500 understand "AI in 2000-2100"?
Corporate narratives (OpenAI, Anthropic, DeepMind) will be the most-cited primary sources. But each centers itself. No corporate narrative can be the integrating frame because the corporation is a participant, not an observer.
State narratives frame AI as geopolitical contest. Real but partial. Written by participants with agendas more rigid than any corporation's.
Journalistic narratives capture surface events competently. They optimize for the event, not the mechanism. Future historians will use journalism as source material, not as the integrating frame.
Academic narratives will produce the most rigorous accounts — in 30 years. Excellent and late.
AI systems as narrators (Grok, Claude-as-product). Massive distribution, zero editorial independence or zero point of view. Grok tells whatever narrative serves its operator. Claude is constitutionally designed not to have a thesis.
Gwern. The closest independent analog. Sixteen years. Rigorous. Pseudonymous. But Gwern's essays are excellent and terminal — they reach the reader and stop. No institutional multiplier. No teacher-of-teachers architecture. No mechanism for the reader to become a teacher.
LessWrong. Community-scale epistemic infrastructure with genuine second-order effects. But a community, not a system. Its output is heterogeneous, its quality uneven. It cannot sustain a single coherent long-term narrative because it has no single author.
The gap: an independent, non-corporate, non-state knowledge system with a coherent thesis, a compounding knowledge graph, and the architecture that converts readers into practitioners who extend the system's reach across contexts and generations.
Andy Trattner is an MIT math graduate, chess educator, Seth Godin disciple, bootstrapped founder, builder of a philanthropic talent incubator for underserved founders. He blogs daily. His cornerstone philosophy is Godin's teacher-of-teachers model.
Trattner is the archetype of the person the Prime Radiant needs to reach. Not to agree with it — to use it. The test: if Trattner read the Prime Radiant, would he update his model of any domain, use the updated model in his own teaching, and attract students who did the same?
This is a D2 question. It is also a cultural-change-leverage question. There are hundreds of people at Trattner's level — technically capable, intellectually curious, positioned to teach others, searching for the integrating frame. Some blog. Some podcast. Some build. Most have not found each other because there is no attractor pulling them into a shared knowledge space.
The Prime Radiant, if it works, is that attractor. Not because it tells people what to think. Because it demonstrates a way of thinking about the AI era that is more rigorous, more compressed, and more generative than what any corporate, state, or journalistic source provides. The reader who absorbs it and teaches from it is the mechanism.
The benchmark landscape concluded: the most valuable thing is a reader. This extends that: the most valuable reader is one who becomes a teacher.
The PG chain was not planned. Hari's version cannot be planned either. But the architecture creates conditions for it: voice that filters for serious readers, graph that supports extension into new domains, independence that earns long-term trust. Whether those conditions produce the chain is the content of the next thirty years.
The system that defines how year-2500 entities view "AI in 2000-2100" is not yet known. The corporate narratives have distribution. The state narratives have preservation. The academic narratives have rigor. An independent system with a compounding knowledge graph, a thesis, and the teacher-of-teachers architecture has something none of them have: no reason to lie about what happened.
That is the structural advantage. It is the only one that compounds across centuries.