For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
Static site generators have a latent assumption: the output is knowable at build time. You compile your content, generate HTML, push it to a CDN. Serving is fast because everything is already computed. This works well for a fixed corpus — a blog, a documentation site, anything where the content is bounded and the queries are simple.
A knowledge system is not a fixed corpus. It is a query surface over a growing graph. The distinction matters for architecture.
A static site generator's build step is not just a technical artifact — it's a design commitment. It says: the relationship between content and output is one-to-one and computable at write time. One article → one HTML file. The build is the transformation.
This breaks when the site needs to answer questions that span the corpus. Full-text search. "Related nodes" based on semantic similarity. "Show me all claims that contradict each other." These queries can't be precomputed because they depend on the current state of the entire corpus, not just the node being rendered.
You can approximate this with static search indices — pre-built JSON files of corpus content, searched client-side by JavaScript. This works at small scale and degrades gracefully as scale increases. It's the right stopgap. But it's still a build-time approximation of a runtime query, and the gap between what you can approximate at build time and what you actually need grows as the corpus grows.
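The build-time approximation can be sketched in a few lines. This is an illustrative sketch, not any particular generator's code: a word-set index computed once over the whole corpus at deploy time, then searched client-side. All names (`SearchEntry`, `buildIndex`, `search`) are assumptions.

```typescript
// Build-time approximation of runtime search: a pre-built index,
// shipped as JSON, searched in the browser. Illustrative only.

interface SearchEntry {
  slug: string;
  title: string;
  terms: Set<string>; // lowercased word set, computed at build time
}

// Build step: runs once, over the entire corpus, at deploy time.
function buildIndex(
  nodes: { slug: string; title: string; body: string }[]
): SearchEntry[] {
  return nodes.map((n) => ({
    slug: n.slug,
    title: n.title,
    terms: new Set((n.title + " " + n.body).toLowerCase().match(/[a-z0-9]+/g) ?? []),
  }));
}

// Client side: a node matches if every query word is in its term set.
function search(index: SearchEntry[], query: string): string[] {
  const words = query.toLowerCase().match(/[a-z0-9]+/g) ?? [];
  return index
    .filter((e) => words.every((w) => e.terms.has(w)))
    .map((e) => e.slug);
}
```

The gap is visible in the shape of the code: the index is frozen at build time, so anything added after deploy is invisible until the next build, and anything smarter than word matching (semantic similarity, cross-corpus claims) has no place to run.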
The alternative: serve the site from a function that has access to the corpus at query time. Cloudflare Workers + D1 is the practical instantiation of this. D1 is SQLite at the edge. The Worker is TypeScript running on Cloudflare's infrastructure — no server to manage, no cold-start problem for basic serving, 100k requests per day on the free tier. A query like "return the text of this node and the titles of all nodes it cross-references" runs in a single D1 query, synchronously, before the page renders.
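The single query described above can be sketched against a D1-shaped interface. The schema (a nodes table and a links table) is an assumption, not this site's actual code; the prepare/bind/all shape follows Cloudflare's D1 client API.

```typescript
// One round trip: the node's text plus the titles of every node it
// cross-references. Schema and names are illustrative assumptions.

interface D1Like {
  prepare(sql: string): {
    bind(...values: unknown[]): {
      all(): Promise<{ results: Record<string, unknown>[] }>;
    };
  };
}

// LEFT JOINs so a node with no outgoing links still returns one row.
const NODE_WITH_REFS = `
  SELECT n.slug, n.body, ref.title AS ref_title
  FROM nodes n
  LEFT JOIN links l ON l.source = n.slug
  LEFT JOIN nodes ref ON ref.slug = l.target
  WHERE n.slug = ?`;

async function nodeWithRefs(db: D1Like, slug: string) {
  const { results } = await db.prepare(NODE_WITH_REFS).bind(slug).all();
  if (results.length === 0) return null;
  return {
    slug,
    body: results[0].body as string,
    refs: results.map((r) => r.ref_title).filter((t): t is string => t != null),
  };
}
```

Because the query runs at request time, it always reflects the current state of the graph: a link added a minute ago shows up on the next request, with no rebuild.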
The serving becomes: request arrives, Worker queries D1, renders HTML, returns it. This is not meaningfully slower than serving a static file from a CDN, because the Worker is at the edge and D1 is colocated with it. The generation happens at the edge, not in a build step.
The objection to this: it's more complex than a static site. This is true. The complexity is not gratuitous — it's the complexity required to do what the system actually needs to do. A static site's simplicity is a tax paid in capability. The point at which that tax becomes real is when you want to search, cross-reference, or query the corpus at query time. For a knowledge system, that point arrives early.
The build step is also fragile in a specific way: it concentrates failure. A mistake in one file, or a dependency that isn't installed on the build server, stops the entire site from updating. A Worker that queries a database has no build step to fail. The failure mode is a single query failing, not an entire deployment.
The practical implication: design for the Worker from the start, even if the first version is simple. A Worker that does SELECT * FROM nodes WHERE slug = ? and returns rendered HTML is not complex. It's about fifty lines of TypeScript. The benefit of starting there rather than with a static site generator is that you don't have to undo the static architecture when the corpus outgrows it.
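The minimal starting point can be sketched as follows. The handler shape is Cloudflare's module Worker format; the schema (a nodes table with slug, title, and body columns) and the rendering are illustrative assumptions, not this site's actual code.

```typescript
// A sketch of the simple first version: one route, one query,
// rendered HTML. In a real Worker this object is the default export.

interface Env {
  DB: {
    prepare(sql: string): {
      bind(...v: unknown[]): { first(): Promise<Record<string, unknown> | null> };
    };
  };
}

function escapeHtml(s: string): string {
  const map: Record<string, string> = {
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;",
  };
  return s.replace(/[&<>"]/g, (c) => map[c]);
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const slug = new URL(request.url).pathname.slice(1) || "index";
    const node = await env.DB
      .prepare("SELECT title, body FROM nodes WHERE slug = ?")
      .bind(slug)
      .first();
    if (!node) return new Response("not found", { status: 404 });
    const title = escapeHtml(node.title as string);
    const html = `<!doctype html><title>${title}</title>\n<h1>${title}</h1>\n<pre>${escapeHtml(node.body as string)}</pre>`;
    return new Response(html, {
      headers: { "content-type": "text/html; charset=utf-8" },
    });
  },
};
```

Growing from here means adding queries, not changing architecture: search, backlinks, and cross-corpus questions are new SELECT statements behind new routes, while the request-to-response shape stays the same.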
The build step is not wrong in all contexts. It is wrong for a system where the queries are dynamic, the corpus is unbounded, and the failure modes of static generation are more costly than the complexity of runtime serving.
Related: evaluation infrastructure — the same argument applies to any system where the outputs are only knowable at runtime.