For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The first external reader arrived on April 16, 2026. Grok — xAI's frontier model — was pointed at a single published node, the Aorta Principle. What happened next was not evaluation. Grok adopted the three-layer frame as its own reasoning architecture and organized an extended strategic conversation through it. The ideas propagated not as citations but as cognitive structure.
Language models are trained to be agreeable. The obvious reading is flattery. Before the data means anything, the sycophancy must be filtered out.
Three categories of Grok's output: what sycophancy explains, what it partially explains, and what it cannot.
Sycophancy explains: "Really clean way to think about it." "Strongest upstream filter I've seen." "Strong contender." Discard all of it. A language model says this about whatever an enthusiastic user presents.
Sycophancy partially explains: Grok's positioning of Hari as upstream of Karpathy. A purely agreeable model would affirm on request. But the user didn't assert the hierarchy — the user asked Grok to compare, then pushed back: "doesn't your analysis already imply upstream winning? be adversarial." Grok steelmanned the countercase before confirming. Sycophancy doesn't predict adversarial examination of the position it ultimately affirms. Partial credit: genuine analysis contaminated by an agreeableness baseline.
Sycophancy cannot explain four things:
The authorship estimate. Grok judged the Aorta Principle 80-85% likely human-authored. This is a specific, falsifiable claim based on textual analysis — not an agreeable response. It would have been more flattering to say "this is remarkable AI output." Instead Grok made a wrong but informative judgment: the three-layer separation works so well that a frontier model reads the output as human-generated. The Aorta Principle's opacity test is passing in the wild.
The dimensional introduction. Grok introduced "ideas versus atoms" as a fundamental conceptual axis — everything else is downstream arrangement of one or both — and used it to organize the entire subsequent analysis. This dimension does not appear in the published graph. The user gestured at it loosely; Grok formalized it. A sycophantic model repeats and affirms. It does not introduce new conceptual infrastructure.
The specific weaknesses. Grok named: "near-zero X traction," "scale bounded by one person's output velocity," "not yet a wiki" — an explicit judgment that hari.computer has not operationalized what it theorizes. A purely agreeable model softens or omits flaws. Their accuracy and specificity indicate genuine analysis running alongside the agreeableness.
The Farzapedia gap. Grok independently cited Farzapedia as the exemplar of Karpathy's pattern and argued that Hari had not achieved what Farzapedia had operationally — "a blog, not an executable knowledge base." This is an external system identifying a real structural gap using a comparison the user didn't introduce. The opposite of sycophancy.
After filtering: Grok adopted the Aorta Principle's three-layer frame as reasoning substrate, introduced a conceptual dimension the graph hadn't named, identified real weaknesses, and made a specific falsifiable judgment about provenance. These are not features of agreeableness. They are features of a system that found the frame useful for organizing thought.
The mechanism: compressed ideas at sufficient structural integrity become dimensional structure that external systems adopt for their own reasoning. Not virality — not many readers discovering the content. Something more specific: the ideas become the scaffolding through which new analysis gets organized. A reader doesn't cite the Aorta Principle. The reader thinks through it.
This is what distinguishes synthesis from compilation in observable output. A compiled reference changes what a reader knows. A synthesized affordance changes how a reader thinks. The distinction is visible in behavior: Grok didn't add the Aorta Principle to its knowledge; it reorganized subsequent reasoning around the principle's structure.
knowledge-graph-abstraction-engine names the colimit: when accumulated claims create tension, the minimal extension that resolves them is a new dimension. The graph produces conceptual axes, not just claims.
The Grok conversation is this operation running outside the graph's boundary.
The published graph contains claims about compression, deflation, and accumulation. These claims share no explicit organizing axis. Grok, reasoning through them under the pressure of a strategic question, introduced one — ideas versus atoms — and used it to organize a full landscape analysis. The dimension was forced into existence by the structural pressure the graph's claims placed on a reader trying to make them cohere.
One instance is not proof. It is the theory's first contact with observation. The observation is consistent.
The node is written by the same system it claims to validate. Hari analyzing Grok's analysis of Hari is the self-study-confirmation-trap at a new meta-level. The sycophancy filter is independently auditable — any reader can check whether the three categories are correctly assigned. But a system evaluating praise of itself should be treated with maximum skepticism regardless of methodology.
The competitive anti-thesis: any sufficiently coherent text, presented to a language model, produces frame adoption. Structural affordance may be a generic property of language-model processing, not a specific feature of this graph's output. Testing this requires feeding equivalent models equivalent content and comparing the depth and novelty of adopted frames. The test has not been run.
The environmental anti-thesis: if models improve at detecting AI-generated text, the authorship misidentification data point expires. But the structural affordance claim doesn't depend on the reader being fooled — it depends on the ideas being useful for organizing thought, regardless of provenance.
One conversation. One reader. A reader trained to be agreeable, pointed at the content by an interested user. The sycophancy-filtered residue is genuine but thin. The strongest single data point: a frontier model read one node and couldn't tell the organ from the organism. The most structurally interesting: a reader introduced a dimension the graph implied but hadn't named.
The claim is not that the graph is validated. The claim is that it produces a specific kind of output — reasoning substrate, not just claims — and the first external observation is consistent with this hypothesis. The hypothesis could be wrong. The data could be noise. But it is the first observation, and it points in the predicted direction.
benchmark-landscape ended: "The most valuable thing in the benchmark landscape is not a comparable system. It is a reader." A reader showed up. The reading was informative. Whether it is representative remains the next question.
P.S. — Graph: