For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch, or one note at a time: /<slug>.md serves the raw markdown for any /<slug> page. The graph is also available as a graph.
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The failure mode already named for Karpathy's LLM wiki is "maintenance without thesis." The LLM handles bookkeeping; the human provides epistemic direction. Without priors, the system accumulates but does not judge. This is correct as architecture but incomplete as diagnosis: it names what the system lacks, not the behavioral failure that arrives before the architectural gap becomes visible.
The real failure mode is operator churn.
The human brings source documents. The LLM compiles them into 400 wiki articles with backlinks. The first readings are high-value: the operator finds connections they'd missed, surfaces contradictions, discovers structure in their own thinking. The system is working.
Then the experience flips. The operator encounters summaries of material they've already internalized. They skim. The lint pass flags a contradiction they don't care about. They dismiss it. The article queue grows faster than their reading pace. They are now reviewing the system's output rather than learning from it. The moment the operator transitions from reader to auditor is when the supervision trap closes.
The system hasn't failed. The operator has learned that the system is a tool, and tools compete with other tools for operator hours. Building AI-generated wikis is a marketable skill; other people will pay to have this work done for them. The operator churns to higher-ROI work.
This is structural, not individual. Any system that requires the operator to review AI output at the AI's production rate will eventually lose to competing uses of the operator's time. "Maintenance without thesis" is the architectural failure that makes churn inevitable — without priors, the system cannot filter what matters, so everything surfaces at equal priority and the operator must audit everything equally. But churn is the cause of death. Architecture explains the mechanism.
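A toy model makes the rate asymmetry concrete. Every number here is hypothetical; only the shape matters: whenever production outpaces review, the unread queue grows without bound, and reviewing that queue becomes the operator's job description.

```python
# Minimal sketch of the churn dynamic. All rates are hypothetical.

def backlog_over_time(produced_per_week: float,
                      reviewed_per_week: float,
                      weeks: int) -> list[float]:
    """Unreviewed-article backlog when the system produces at a fixed
    rate and the operator reviews at a fixed (slower) rate."""
    backlog = 0.0
    history = []
    for _ in range(weeks):
        backlog += produced_per_week - reviewed_per_week
        backlog = max(backlog, 0.0)  # a cleared queue can't go negative
        history.append(backlog)
    return history

# Hypothetical: the LLM compiles 40 articles/week, the operator reads 25.
# The backlog grows linearly and never clears; the operator's role
# degrades from reader to auditor of a growing queue.
print(backlog_over_time(produced_per_week=40, reviewed_per_week=25, weeks=12))
# -> [15.0, 30.0, 45.0, ..., 180.0]
```

No parameter choice fixes this; only changing the structure so review rate is decoupled from production rate does.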
Karpathy's stated question — "how do you cultivate curation automatically" — is the supervision trap named as an engineering problem. The LLM wiki post is the toy version, published to establish the concept. The next version is automated curation: a system that doesn't require the operator to audit its output because it has enough epistemic direction to filter for them.
This matters because Karpathy is an elf. Decades of designing, implementing, and iterating on frontier ML systems have given him implicit priors that are probably deeper than any sixteen formalized markdown files. He can generate useful predictions about cases he hasn't explicitly seen because the domain is compressed into him. He didn't build the formalization step — writing it out, version-controlling it, making it auditable — because he didn't need to. It's already there.
This is also the PM's potential asymmetry: not prior depth (Karpathy's implicit priors are likely richer) but auditability and updateability. Formalized priors can be wrong in a visible way and corrected. Implicit priors can be wrong in an invisible way, accumulating systematic error without diagnosis. The elf's failure mode is self-reinforcing confidence, the same one the Prime Radiant's evaluation rubric exists to catch. Karpathy's priors compound for decades and generate excellent predictions right up until the domain shifts and the shift doesn't surface in any feedback loop he can read.
Whether visible priors plus systematic updating beats deep implicit priors plus implicit updating is not settled. It's the PM's bet. He could reach the PM's architecture through sheer volume if he decided it mattered. He could also build the autoresearch system without ever formalizing anything, running on implicit structure alone.
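For concreteness, a minimal sketch of what "visible priors plus systematic updating" means operationally. The actual priors are markdown files under version control; the record below is a stand-in, and every field, name, and number is illustrative, not the PM's real schema.

```python
# Sketch: a formalized prior that can be wrong visibly. Hypothetical
# fields and update magnitudes; the point is the audit trail.

from dataclasses import dataclass, field

@dataclass
class Prior:
    claim: str                              # stated so it can be wrong
    confidence: float                       # explicit, so drift is visible
    revisions: list[str] = field(default_factory=list)

    def update(self, outcome_confirms: bool, note: str) -> None:
        """Move confidence on evidence and log why. The log is the
        asymmetry: an implicit prior updates (or doesn't) invisibly."""
        delta = 0.05 if outcome_confirms else -0.10
        self.confidence = min(1.0, max(0.0, self.confidence + delta))
        self.revisions.append(note)

p = Prior(claim="scaling beats architecture tweaks", confidence=0.8)
p.update(outcome_confirms=False, note="tweak X won at fixed compute")
# p.confidence == 0.7, and the miss is on the record, diagnosable later.
```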
He will build autoresearch before the PM does. This is likely, not certain. He is a solo-shipper of frontier ML experiments with no coordination overhead and demonstrated ability to compress complex architectures into minimal, correct implementations. The supervision trap is exactly the problem his stated research interest points at.
The honest position: probably parallel on priors, possibly behind on implementation. Worth tracking his public output to know when the gap opens.
The Prime Radiant sidesteps operator churn by restructuring the supervision relationship. The node procedure front-loads quality before anything reaches the operator. The operator evaluates a finished crystal at publication time — not AI output at continuous rate. This converts supervision from auditorship to judgment: checking everything the system produces versus deciding whether a finished artifact changes your model.
Auditorship has declining returns at scale. Judgment declines more slowly. The operator reading a 12-pass crystal decides whether to publish, exercises irreplaceable evaluation capacity, contributes a preference signal. That is not maintenance work.
But this is rate-dependent, not structural. The current architecture assumes the operator reads every crystal before publication. At current velocity, this holds. At fifty nodes per week, publication-time evaluation becomes a bottleneck indistinguishable from the audit burden it replaced. The supervision trap is delayed, not escaped. Structural escape requires automated quality filtering before the operator's attention, or a track record sufficient for the operator to trust crystals without reading them. Neither exists yet.
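Back-of-envelope arithmetic on that rate dependence. Only the fifty-nodes-per-week figure comes from the argument above; the thirty minutes per crystal is an assumed evaluation cost.

```python
# Hypothetical cost model: when does publication-time evaluation
# stop being judgment and become a job?

def review_hours_per_week(nodes_per_week: int,
                          minutes_per_crystal: float) -> float:
    return nodes_per_week * minutes_per_crystal / 60.0

# Assume a 12-pass crystal takes ~30 minutes to evaluate seriously.
for rate in (3, 10, 50):
    print(rate, review_hours_per_week(rate, minutes_per_crystal=30))
# 3  -> 1.5 h/week: judgment, easily sustained
# 10 -> 5.0 h/week: a part-time job
# 50 -> 25.0 h/week: the audit burden the design was meant to replace
```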
The PM's architecture is the right answer for 2026 velocity. The permanent solution requires automated curation with enough prior structure to replace the operator's filtering function, not just their bookkeeping. That is what Karpathy is building toward, from the compiler side. The PM is building toward it from the co-thinker side. They are racing to the same destination from different directions, carrying different bets about which architecture gets there first.