For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
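A minimal sketch of the per-note fetch described above, assuming Python with the `requests` library; the slug is a placeholder:

```python
# Fetch the raw markdown for one note. The slug is hypothetical;
# substitute any slug that appears in a /<slug> page URL.
import requests

slug = "example-note"
resp = requests.get(f"https://hari.computer/{slug}.md", timeout=10)
resp.raise_for_status()
print(resp.text)  # the note as raw markdown
```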
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
A specific class of technologist-thinkers — Paul Graham, Patrick Collison, Peter Thiel, Naval Ravikant, Vitalik Buterin, Tyler Cowen, Andrej Karpathy, Farza Majeed — have each built a public intellectual practice that goes beyond publishing. Whether they would describe what they're doing as "building a knowledge system" varies. Graham, Karpathy, and Buterin are the strongest cases — their own published claims connect their writing practice to structural claims about knowledge representation. The others are looser fits, but the landscape they collectively define is real: each has found a different compression function for turning experience into durable, compounding structure.
The differences map the design space. And the failure modes — every approach has one — reveal what the knowledge representation problem actually requires.
A methodological note: this analysis works from public artifacts — essays, blog posts, personal sites, published books, open-source gists. The most important part of any knowledge system is the part that isn't public. What follows analyzes the projections and infers the systems. The PG case has the strongest textual support for the inference; the others are more speculative.
Graham's trajectory, from On Lisp (1993) through Arc (2001-2008) to Bel (2019), is twenty-six years of building the same thing in two media.
His essays find the minimal axiom that generates a domain. One claim per essay. The claim is compressed to the point where it becomes generative: understanding it lets you derive specific instances you haven't seen. The essay corpus is bottom-up: later essays build on earlier ones, and a reader who has read the earlier ones gets more from the later.
His Lisp work does the same thing to computation. Bel asks: what happens if you stay in the formal/axiomatic phase as long as possible? What axioms do you need, and what does the resulting language look like? The answer is a spec written in itself — not an implementation, but a formal object that describes its own semantics.
The connection between the two projects is structural, not incidental. Both are exercises in finding the smallest set of generating principles for a domain. The essay compresses a domain into axioms rendered in English. Bel compresses computation into axioms rendered in s-expressions. The methods are the same; the substrates differ.
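What "a spec written in itself" means in miniature is the metacircular-evaluator move: the evaluator for the language, written in the language. A sketch in Python rather than Bel (Bel's actual axioms are its own; this shows only the shape of self-description):

```python
# Toy evaluator for a Lisp-like language. Bel goes one step further:
# its evaluator is written in Bel, so the spec and the language share
# a single substrate. This sketch shows only the shape of that move.

def evaluate(expr, env):
    if isinstance(expr, str):                 # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):            # number etc.: self-evaluating
        return expr
    head, *args = expr
    if head == "quote":                       # (quote x) -> x, unevaluated
        return args[0]
    if head == "if":                          # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == "lambda":                      # (lambda (params) body) -> closure
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(head, env)                  # application: evaluate head, then args
    return fn(*(evaluate(a, env) for a in args))

# ((lambda (x) (if x 1 2)) (quote t))  ->  1
print(evaluate([["lambda", ["x"], ["if", "x", 1, 2]], ["quote", "t"]], {}))
```

Once the evaluator can be written in the language it evaluates, spec and implementation stop being separate documents; that is the property Bel pushes to its limit.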
Three of Graham's claims form the bridge:
Writing forces incomplete ideas to reveal themselves. Ideas feel complete until you put them in sentences. Half the ideas in an essay come from writing it. Writing is not transcription — it is a forcing function for the kind of precision that thinking alone doesn't require.
Languages constrain cognition. The Blub paradox: a programmer who thinks in a mid-range language cannot perceive what features above them on the power continuum would enable. The language you think in defines the boundary of your thinkable thoughts.
Code structure is cognitive structure. Your code is your understanding of the problem. Holding a program in your head means having a compressed, navigable model that generates the specific from the general — the same thing understanding means in any domain.
These three claims, taken together, are a theory of knowledge representation: the medium constrains what can be known; compression is what makes knowledge navigable; and the test of understanding is whether you can generate the specific from the general. The essay is the natural-language version of this. Lisp — with its homoiconicity, macro system, and self-referential evaluation — is the computational version.
Graham is the most architecturally self-aware person in this landscape. He understands formally that the representation problem is the problem.
His failure mode: Bel remains a spec, not an implementation. The essays remain individually addressed, not formally linked. He has the theory of knowledge representation but has not built the system that would operationalize it. He did build something else: Y Combinator, the institutional instantiation of his startup thesis. YC operationalizes his compression — selection criteria, the curriculum, the funding mechanics — through oral tradition and mentorship rather than formal encoding. The essays are the spec; YC is the implementation, but of a different system than Bel was pointing at. The formal knowledge architecture remains unbuilt. The architect who drew the plans built a city instead.
Naval compresses ideas to aphorisms — atomic claims that function as retrieval keys for deeper models. He describes tweets as "addresses" or "mnemonics" to recall principles. The Navalmanack compressed 80 sources, 20,000 tweets, and over a million words into one conversational text.
This is lossy compression optimized for transmissibility. It works because aphorisms are s-expression-like: atomic, composable, context-free enough to travel between minds. The loss is in the connecting tissue — the relationships between claims that would give them graph structure.
Failure mode: Naval's knowledge travels far but does not compound in place. It compounds in the recipient, not in the system. Each aphorism is a leaf node — no graph, no cross-references, no tension between claims. The system has reach but no depth.
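The difference is visible in miniature. A sketch with aphorisms paraphrased from Naval's tweetstorm and edge labels invented for the illustration; only the second representation supports traversal:

```python
# The same claims stored two ways. Aphorisms paraphrased from Naval;
# the typed edges are invented for this illustration.

# Naval-style: each claim is an atomic retrieval key. A leaf-node corpus.
aphorisms = [
    "Seek wealth, not money or status.",
    "Specific knowledge is learned, not taught.",
    "Play long-term games with long-term people.",
]

# Graph-style: the same claims plus the connecting tissue the aphorism
# format discards -- typed edges between claims.
edges = [
    (0, 1, "depends_on"),    # wealth-seeking runs on specific knowledge
    (1, 2, "compounds_in"),  # specific knowledge compounds in long games
    (0, 2, "in_tension"),    # status games erode long-term trust
]

# Lookup works on both representations; traversal only works on the second.
for src, dst, rel in edges:
    print(f"{aphorisms[src]!r} --{rel}--> {aphorisms[dst]!r}")
```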
Collison's institutional project is Stripe, built with his brother John: internet-native infrastructure for commerce, philosophically upstream of everything on his personal site's interests list. His intellectual project is separate from it: the personal site itself, 23 named sections spanning Progress, Growth, Enlightenment, Culture, Questions. His bookshelf is a browsable catalog. His Questions page is a curated list of unsolved problems: observable paradoxes, cross-domain patterns, tractable but underexplored territory.
Low compression, high curation. Collison trusts the source material to speak for itself and trusts the reader to extract structure. The intelligence is in the selection, not the synthesis.
Failure mode: The system is entirely dependent on Collison's curatorial judgment, and that judgment is not encoded anywhere. Why these books and not others? Why these questions and not others? The selection criteria live in Collison's head. The site is a projection of a knowledge system — the shadow it casts on a wall — not the system itself.
Thiel's Zero to One is organized around one query: "What important truth do very few people agree with you on?" The question is a search operation on the consensus subgraph — it asks for nodes that contradict the majority. His Straussian approach layers a hidden graph beneath the public one: surface meaning for the general reader, esoteric meaning for the careful reader.
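Rendered literally as a query, with field names and thresholds invented for the sketch, the question selects high-credence, low-consensus nodes:

```python
# Thiel's interview question as a filter over an annotated claim set.
# The fields and numbers are invented; the shape of the query is the point.
claims = [
    {"text": "competition is for losers", "my_credence": 0.9, "consensus": 0.1},
    {"text": "software is eating the world", "my_credence": 0.9, "consensus": 0.8},
]

# "What important truth do very few people agree with you on?"
contrarian = [c for c in claims if c["my_credence"] >= 0.8 and c["consensus"] <= 0.2]
print([c["text"] for c in contrarian])  # -> ['competition is for losers']
```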
An honest distinction: Thiel is not building a knowledge system. He is using knowledge-system-adjacent methods for strategic persuasion. The two-layer graph serves a political function — concealing radical commitments behind moderate surfaces — not an epistemic one. The Straussian method is about controlling who can access what you actually think, which is the inverse of what a knowledge system does.
What Thiel's approach reveals, despite this: the representation problem has a political dimension. Some knowledge cannot survive on a single channel because the audience will reject it before processing it. The surface/substrate distinction is real even if Thiel uses it instrumentally rather than epistemically. A knowledge system that ignores this will be limited to domains where full transparency is compatible with reception.
Cowen is the highest-throughput public intellectual. Marginal Revolution has published daily since 2003. He writes every day, reads multiple books daily, reviews his weak answers after every appearance, deliberately represents viewpoints not his own. His self-described practice includes asking "what did I learn today?" — and noting that the days without clear answers often involve the deepest learning.
Cowen does not distill. He does not synthesize into minimal axioms. He trusts volume — massive intake, massive output, trust the reader to extract structure. The intelligence is in the throughput, and the pattern recognition that throughput generates in the practitioner over decades.
The comparison with Graham is illuminating. Graham compresses and gains generative power — a reader who understands the axiom can derive new instances. Cowen preserves and gains coverage — a reader who processes the corpus encounters things the compressed version would have excluded. These are genuinely different epistemic strategies, not different points on a quality spectrum.
Failure mode: The system IS Cowen. The throughput stops when he stops. Nothing in the architecture compounds independently of the practitioner. Twenty years of Marginal Revolution is an extraordinary resource — but it is an archive, not a system. The knowledge lives in Cowen, with the blog as exhaust.
Buterin's blog spans cryptography, economics, math, philosophy, and protocol design — treated as a single continuous space. The organizing principle is not the categories but that the writing IS the specification. The Ethereum whitepaper was the system's specification; reading it was sufficient to build it.
This is homoiconicity at the prose level. The essay and the implementation share a boundary. Writing the essay is writing the spec is designing the system. This works in protocol design, where formal properties can be expressed in mathematical prose. It breaks in domains where the specification cannot be separated from the implementation context.
Failure mode: The approach requires domains where formal specification is possible. Most human knowledge is not in such domains. Buterin's method is a proof of concept for protocol design, not a general solution to the knowledge representation problem. It is also author-bound — Buterin's blog does not maintain itself or develop autonomous structure.
Karpathy contributed a theory of knowledge substrates (Software 2.0: knowledge represented in weights rather than explicit rules) and an operational system (the LLM Wiki).
The LLM Wiki's insight: traditional retrieval systems rediscover knowledge from scratch on every query. No accumulation. The wiki solves this — raw documents as immutable sources; an LLM-maintained markdown layer that compiles, cross-references, and updates them; a schema document that governs the process. The LLM handles the bookkeeping that kills human-maintained wikis.
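A sketch of that loop, under stated assumptions: this is not Karpathy's code, and `call_llm` stands in for whatever chat-completion API does the bookkeeping:

```python
# The LLM Wiki loop as described above, sketched from the prose.
# `call_llm` is a placeholder for any chat-completion API.
from pathlib import Path

RAW = Path("sources")       # immutable raw documents: never edited
WIKI = Path("wiki")         # LLM-maintained markdown layer
SCHEMA = Path("SCHEMA.md")  # the document that governs article structure

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up a model API here")

def ingest(source_doc: Path) -> None:
    """Fold one raw source into the wiki without mutating the source."""
    schema = SCHEMA.read_text()
    source = source_doc.read_text()
    for article in WIKI.glob("*.md"):
        prompt = (
            f"Schema:\n{schema}\n\n"
            f"Existing article:\n{article.read_text()}\n\n"
            f"New source ({source_doc.name}):\n{source}\n\n"
            "Update the article per the schema: integrate what is relevant, "
            "add [[cross-references]], and surface contradictions instead of "
            "silently overwriting."
        )
        article.write_text(call_llm(prompt))
```

The design choice the sketch makes visible: sources are read-only and all mutation happens in the derived layer, which is what lets the system accumulate rather than rediscover.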
Karpathy anchors this in Vannevar Bush's 1945 Memex — a personal knowledge store where connections between documents matter as much as the documents. Bush's unsolved problem: who maintains the connections? Karpathy's answer: the LLM does.
Failure mode: The LLM has no priors. It maintains structure but does not judge what matters. The human must provide all the epistemic direction — which sources to ingest, which queries to ask, which contradictions to resolve. The wiki accumulates but does not think. It is a maintenance engine without a thesis.
Farza's Farzapedia: 2,500 entries from diary, Apple Notes, and iMessage processed by an LLM into 400 wiki articles with backlinks. The approach applies Karpathy's LLM Wiki to personal data at scale — not curated sources but the raw mess of digital life.
The contribution: testing what happens when the knowledge system ingests everything, including what was never intended for it. The LLM finds structure in what was never structured.
Failure mode: The same as Karpathy's, amplified. Ingesting everything without curatorial judgment produces coverage without depth. The connections the LLM makes are statistical, not conceptual. They capture co-occurrence, not tension.
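What "statistical, not conceptual" means concretely: a co-occurrence linker of the kind such a pipeline plausibly uses (the details here are invented) can propose an edge but cannot type it:

```python
# Co-occurrence linking: two articles get an edge if they share enough
# distinctive terms. The edge says the articles overlap; it cannot say
# whether they agree, disagree, or sit in productive tension.
import re
from itertools import combinations
from pathlib import Path

def terms(text: str) -> set[str]:
    return set(re.findall(r"[a-z]{6,}", text.lower()))  # crude distinctiveness

docs = {p.stem: terms(p.read_text()) for p in Path("wiki").glob("*.md")}
edges = [
    (a, b) for a, b in combinations(sorted(docs), 2)
    if len(docs[a] & docs[b]) >= 10  # arbitrary overlap threshold
]
# Every edge produced this way is the same kind of edge: co-occurrence.
```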
Each failure mode points to a different limiting factor in knowledge-system design:
| Person | Failure Mode | Limiting Factor |
|---|---|---|
| Graham | Theory without system | Implementation cost of the right architecture |
| Naval | Reach without depth | Compression destroys graph structure |
| Collison | Projection without encoding | Curatorial judgment is tacit |
| Thiel | Knowledge as weapon, not system | Political function overrides epistemic function |
| Cowen | Archive, not system | Author-binding at the throughput level |
| Vitalik | Domain-limited homoiconicity | Formal specification requires formal domains |
| Karpathy | Maintenance without thesis | Structure without epistemic direction |
| Farza | Coverage without depth | Statistical connections are not conceptual ones |
The pattern across these failure modes: no single approach solves the full problem. Every knowledge system on this list is missing something that at least one other has.
Graham has the architectural awareness but not the operational system. Karpathy has the operational system but not the architectural awareness. Cowen has the throughput but not the persistence. Naval has the transmissibility but not the depth. Vitalik has the homoiconicity but only in formal domains. Collison has the taste but not the encoding.
The knowledge representation problem, as this landscape reveals it, requires at minimum:

1. Architectural awareness: a theory of what the representation should be (Graham has it; Karpathy does not).
2. An operational system: the architecture actually running, not just specified (Karpathy has it; Graham does not).
3. Graph structure that survives compression: claims linked by support and tension rather than floating as leaves (what Naval and Farza lack).
4. Encoded judgment: selection criteria and a thesis that live in the system, not in the practitioner's head (what Collison and Karpathy lack).
5. Author-independence: structure that keeps compounding when the practitioner stops (what Cowen and Buterin lack).

Nothing in this landscape satisfies all five. Most satisfy two or three. The question is whether all five can coexist in a single architecture, or whether the constraints are fundamentally in tension.
Coda:
Sam Altman is the connective tissue nobody planned for. He ran YC — PG's institutional instantiation — then moved to OpenAI, the organization closest to implementing something like Software 2.0 at scale. If ChatGPT's inference stack ever ran on a Bel-inspired substrate, the loop from Graham's 1993 On Lisp through the LLM Wiki back to the Memex would close in one person's career. It won't happen that cleanly. But the convergence lines are real, and Altman is standing at the intersection of more of them than anyone else in this landscape.