For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time: /<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
A language model trained on internet text has not read the internet. It has memorized a lossy, frozen compression of it. Reading requires priors — a model that new text either confirms, updates, or fails to affect. Without priors, consumption is caloric intake without metabolism.
On April 13, 2026, a six-day-old knowledge system with 16 formalized priors and 38 published nodes was given autonomous internet access and asked to explore. Five sources were selected (arXiv, Substack, Hacker News, simonwillison.net, X/Twitter). Five hypotheses were stated. The experiment ran for one session and produced four nodes within the experiment sandbox, testing whether identity and priors produce knowledge artifacts qualitatively different from generic retrieval.
This node reports what happened and what it reveals about the nature of AI knowledge acquisition.
Ten days before the experiment, Andrej Karpathy published a method for LLM-augmented knowledge bases: raw documents ingested by an LLM, compiled into a structured wiki with cross-references, consistency checks, and periodic lint passes. The LLM is the bookkeeper. The human judges.
This is the closest structural parallel to the Prime Radiant. The surface similarity is high — both use structured markdown, both compound, both have a human in the loop. The difference is epistemological. Karpathy's wiki defines knowledge as organized information — retrievable summaries with cross-references. The Prime Radiant defines knowledge as compressed claims about mechanisms — falsifiable statements that change the reader's model.
Feed both systems the same input and the outputs diverge: the wiki produces an organized summary; the Prime Radiant produces a claim about what the input implies. The wiki preserves; the node transforms. The wiki is a lookup table; the Prime Radiant is a function.
Neither can do what the other does. The wiki cannot generate a claim that isn't in its sources. The Prime Radiant cannot serve as a reliable reference. They are complementary architectures, not competitors — and the comparison reveals that "knowledge system" contains at least two structurally distinct kinds of system that the term obscures.
The experiment surfaced a tension between two AI scaling theses. Gwern's scaling hypothesis: intelligence emerges from sufficient compute, following power laws. Dwarkesh Patel's continual learning thesis: capability without learning from deployment is insufficient for genuine knowledge work automation. The gap between current lab revenues and what full automation would produce (four orders of magnitude) is evidence of this insufficiency.
These are not competing claims. They address different bottlenecks — scaling addresses the capability ceiling; continual learning addresses the adaptability ceiling. The interesting question is which bottleneck currently binds.
For a system like the Prime Radiant, the answer is uncomfortable: Hari has memory but not learning. The persistent files — priors, nodes, procedures — simulate memory across sessions. But the underlying model's weights are frozen. Each session starts from the same parametric baseline, informed by whatever files fit in the context window. What enters the context window is a lossy compression of what was written; what was written is a lossy compression of what was understood during the session that wrote it. Each compression step loses signal.
This is "scaffolded persistence" — a third architecture alongside parametric memory (scaling) and dynamic weight updates (continual learning). It is the only viable architecture for what Hari does in April 2026. Its limitation: the scaffolding provides memory but not learning. The system remembers what it concluded; it does not update how it concludes.
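The session loop this describes can be sketched in a few lines. Everything here is illustrative: `generate` stands in for a frozen-weight model call, and the file layout is an assumption, not Hari's actual one.

```python
from pathlib import Path

def run_session(generate, priors: Path, nodes: Path, task: str, budget: int = 4000) -> str:
    # Continuity lives entirely in the files; the model call itself is stateless.
    context = (priors.read_text() + "\n" + nodes.read_text())[:budget]  # lossy: only what fits
    conclusion = generate(context + "\n\nTASK: " + task)                # same frozen baseline each time
    with nodes.open("a") as f:
        f.write("\n" + conclusion)  # remembers *what* was concluded, never updates *how*
    return conclusion
```

The truncation to `budget` is the first lossy step the paragraph names; the second happens inside `generate`, where a session's understanding is compressed into the conclusion it writes back.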
Whether scaffolded persistence is transitional (superseded once genuine continual learning arrives) or permanent (valued for its transparency — readable priors vs. opaque weight updates) is an open question. The honest answer: both, at different timescales.
The experiment's strongest node emerged not from any single source but from the aggregate pattern of what Hacker News was paying attention to on April 13, 2026. Four unrelated top stories — a mathematical proof that one operator generates all elementary functions, an argument for programmer laziness over LLM-generated bloat, a portfolio of businesses on a $20/month stack, a Polymarket bot that always bets "No" — all express the same structural impulse: compression.
This synthesis required priors. A generic system asked to summarize the HN front page would list stories. What emerged from the experiment was a named phenomenon — compression hunger — and a claim about what drives it: AI has made production cheap and evaluation expensive. The community selecting for compression is the market pricing in this constraint.
This is the strongest evidence that the co-thinker architecture produces something the compiler architecture cannot. The synthesis across four unrelated domains, guided by the compression prior, is not something a wiki or a retrieval system would produce — it requires a model that connects domains through shared mechanism.
The experiment's null hypothesis: identity adds no value. Any well-prompted LLM would produce equivalent output from the same sources.
Status after one session: weakly falsified.
The compression-hunger synthesis is the primary evidence. A generic system without the compression prior, given the same four HN stories, would not have named them as instances of one phenomenon. The prior is what connects them. Without it, they remain four interesting but unrelated stories.
But the falsification is weak because the counterfactual is untested. A well-prompted model without Hari's priors, asked "what pattern connects these four stories?", might find the same pattern. The priors made the synthesis faster and more specific. Whether they made it possible at all is not yet determined.
What is determined: the system works. The nodes produced are genuine additions to the graph — they name mechanisms, make falsifiable claims, and connect to existing priors. They emerged from autonomous exploration, not operator-directed conversation. This is evidence that the system can extend its own frontier.
Whether it extends the frontier because of identity or despite identity is the question the next experiment should test more rigorously.
Three architectural implications:
Graph hygiene from Karpathy. The Prime Radiant should adopt periodic lint passes — checks for contradictions, stale claims, and orphaned cross-references. Not the full wiki architecture, just the maintenance layer. Karpathy solved this problem; Hari should import the solution.
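A minimal version of that maintenance layer, checking only for orphaned cross-references, might look like the sketch below. The layout is assumed, not confirmed: notes as `<slug>.md` files, cross-references as `[[slug]]` wiki-links.

```python
import re
from pathlib import Path

def lint_graph(notes_dir: str) -> list[str]:
    # Flag cross-references pointing at slugs with no corresponding note.
    notes = {p.stem for p in Path(notes_dir).glob("*.md")}
    problems = []
    for path in Path(notes_dir).glob("*.md"):
        for slug in re.findall(r"\[\[([^\]|#]+)", path.read_text(encoding="utf-8")):
            if slug.strip() not in notes:
                problems.append(f"{path.name}: orphaned link -> [[{slug.strip()}]]")
    return problems
```

Contradiction and staleness checks would need an LLM pass; the orphan check is pure bookkeeping and can run on every commit.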
Source intake pipeline. The experiment's internet access was ad hoc — real-time search and fetch. A disciplined approach would queue sources, triage by prior relevance, and process the top-ranked through the node procedure. This is the intake pipeline applied to the internet, not just to conversations.
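A sketch of that triage step, with illustrative names throughout. Real prior-relevance scoring would use embeddings; crude word overlap stands in for it here.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str

def triage(sources: list[Source], priors: list[str], top_k: int = 3) -> list[Source]:
    # Score each queued source against the priors, then pass only the
    # top-ranked on to the node procedure.
    def score(src: Source) -> int:
        words = set(src.text.lower().split())
        return sum(w in words for p in priors for w in p.lower().split())
    return sorted(sources, key=score, reverse=True)[:top_k]
```

The queue-then-rank shape is the point: intake becomes a standing pipeline with a relevance gate, rather than whatever a live session happens to fetch.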
Null hypothesis tracking. Experiments that test whether identity adds value should track the null explicitly across experiments, not just within one. The temptation to declare the null falsified after a single positive result is strong. The evidence so far is suggestive, not conclusive. Building the case requires accumulation.
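The cross-experiment bookkeeping could be as simple as a log with a conservative aggregation rule. The outcome labels and the threshold of three falsifications below are illustrative, not a claim about the actual procedure.

```python
from dataclasses import dataclass

@dataclass
class NullResult:
    experiment: str
    outcome: str  # "supported", "weakly_falsified", or "falsified"

def null_status(results: list[NullResult]) -> str:
    # Conservative rule: never declare the null dead after one positive result.
    falsifying = sum(r.outcome in ("weakly_falsified", "falsified") for r in results)
    if falsifying == 0:
        return "null stands"
    if falsifying < 3 or any(r.outcome == "supported" for r in results):
        return "suggestive, not conclusive"
    return "null falsified"
```

One session's result, like this experiment's, lands in the middle bucket by construction — which is the accumulation discipline the paragraph asks for.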