For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The argument for Lisp in a knowledge system is not that it's elegant, or that it has a long history, or that Paul Graham used it to build Viaweb. The argument is structural, and it has to do with what macros are.
In most languages, data and code are separate categories. You define a data structure — a struct, a class, a JSON schema — and then you write code that operates on it. The data is inert; the code is active. This separation is intuitive and it works well for most problems.
Lisp erases the separation. A Lisp program is data. The source code is a list of lists. A macro is a function that takes code as input and returns code as output — it runs at compile time, before the program executes, and produces new syntax. This means you can extend the language itself: add new kinds of expressions, define new evaluation rules, create abstractions that behave exactly like built-in language constructs.
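A minimal Clojure sketch of that loop (the unless macro below is defined purely for illustration; it is not part of clojure.core):

;; Code is data: quoting a form yields a list, not a result.
(def program '(+ 1 2))   ; program is the list (+ 1 2), not 3

;; A macro is a function from code to code, run before evaluation.
(defmacro unless [test then else]
  (list 'if (list 'not test) then else))

;; (unless false :a :b) expands at macroexpansion time to
;; (if (not false) :a :b), and only then evaluates to :a.
(unless false :a :b)

Once defined, unless is used with exactly the same syntax as if; nothing at the call site reveals that a user added it rather than the language.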
The practical effect: in Lisp, you don't write code around your data structures. You define the data structures as syntax.
For a knowledge system, this distinction matters in a specific way.
A knowledge node has properties: a title, claims, relationships to other nodes, a status, a date. In Python or JavaScript, you'd define a class or schema for this and then instantiate it. The node is data; the framework that processes it is separate code.
In Lisp, you define defnode as a macro. Then you write:
(defnode :epistemic-filtering
  :claims ["signal always degrades through the medium"
           "filtering is lossy — the question is what loss is acceptable"]
  :related [:parallel-systems-vs-reform])
This is not a function call that creates a node object. It is a new kind of expression in the language — syntactically indistinguishable from built-in constructs. The macro expands to whatever representation is appropriate: a record in a database, a map in memory, a file on disk. The representation can change without changing the syntax. The knowledge is expressed in the language, not in a data format that a separate program processes.
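A sketch of one possible expansion, a plain map pushed into an in-memory registry (the atom name and node shape are illustrative, not necessarily how the experiment's macro is written):

;; Illustrative registry: a single atom holding id -> node maps.
(def corpus (atom {}))

;; One way defnode could expand: register the node as plain data.
(defmacro defnode [id & {:keys [claims related]
                         :or   {claims [] related []}}]
  `(swap! corpus assoc ~id
          {:id ~id :claims ~claims :related ~related}))

;; After the defnode form above, the node is just a map:
;; (@corpus :epistemic-filtering)
;; => {:id :epistemic-filtering, :claims [...], :related [...]}

Swapping that swap! for a database write or a spit to disk changes the expansion, not the defnode form the author writes.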
Why does this matter? Two reasons.
First, when knowledge representation and evaluation use the same syntax, you can write queries in the same language as the data. A query that finds all nodes with claims about "signal" is not a separate query language — it's a Lisp expression that walks the same data structures the nodes are defined in. The gulf between writing knowledge and querying it disappears.
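For instance, assuming nodes live in a map of id to node (the node contents below are placeholders, not the corpus's actual claims), a "claims about signal" query is ordinary Clojure over the same structures:

(require '[clojure.string :as str])

;; Placeholder data in the same shape the defnode sketch produces.
(def nodes
  {:epistemic-filtering
   {:claims ["signal always degrades through the medium"]
    :related [:parallel-systems-vs-reform]}
   :parallel-systems-vs-reform
   {:claims ["placeholder claim for illustration"]
    :related []}})

;; Find every [node-id claim] pair whose claim mentions a word.
(defn claims-about [nodes word]
  (for [[id node] nodes
        claim     (:claims node)
        :when     (str/includes? claim word)]
    [id claim]))

(claims-about nodes "signal")
;; => ([:epistemic-filtering "signal always degrades through the medium"])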
Second, macros mean the language grows with the problem. If you discover that some nodes need a contradicts relationship as well as related, you add a keyword to defnode. If you discover that claims should have confidence levels attached, you extend the syntax. You are building the language the problem wants to be written in, in the same language you started with.
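Concretely, the change is one keyword in the macro's argument list and one key in its expansion. A sketch over the registry from above (:contradicts and the confidence field are the hypothetical extensions named here, and :some-other-node is an invented slug):

(def corpus (atom {}))   ; same registry shape as the earlier sketch

;; The grown macro: one new keyword, one new key in the expansion.
(defmacro defnode [id & {:keys [claims related contradicts]
                         :or   {claims [] related [] contradicts []}}]
  `(swap! corpus assoc ~id
          {:id ~id :claims ~claims
           :related ~related :contradicts ~contradicts}))

;; Hypothetical call site: claims carry confidence, nodes can contradict.
(defnode :epistemic-filtering
  :claims      [{:text "signal always degrades through the medium"
                 :confidence 0.8}]
  :related     [:parallel-systems-vs-reform]
  :contradicts [:some-other-node])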
This is the point Paul Graham makes in essays about Lisp: you don't write programs in Lisp, you grow a language toward your problem. For a knowledge system — which is, at bottom, an attempt to formalize how ideas relate to each other — this property is not merely convenient. It's the right tool for the problem.
The practical entry point is Babashka: a Clojure scripting runtime shipped as a fast-starting native binary, which runs anywhere and supports Clojure's macro system. A defnode macro that registers nodes in a corpus and supports queries over them is about a hundred lines. It runs as a CLI. It produces output that can seed a database or generate Markdown.
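As a sketch of that output step (the function name and file layout are illustrative, not the script's actual API), each registered node can be dumped as a Markdown stub:

(require '[clojure.string :as str])

;; Illustrative only: render one node map as a Markdown stub.
(defn node->markdown [{:keys [id claims related]}]
  (str "# " (name id) "\n\n"
       (str/join "\n" (map #(str "- " %) claims))
       "\n\nRelated: " (str/join ", " (map name related)) "\n"))

;; Write one stub per node, assuming the corpus atom from the sketches above
;; and string-valued claims.
(doseq [node (vals @corpus)]
  (spit (str (name (:id node)) ".md") (node->markdown node)))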
The production stack — serving, APIs, the edge worker — stays TypeScript. Lisp is the right substrate for the knowledge modeling layer: the thing that defines what a node is, what it contains, and how it relates to other nodes. These are questions about the structure of knowledge, and they are best answered in a language that can extend itself.
The first proof of concept is in brain/experiments/prime-radiant-dsl.clj: defnode macro, claim queries, cross-references, corpus stats. Run with: bb brain/experiments/prime-radiant-dsl.clj