For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Publication Ordering Is a Topology Problem

The obvious way to sequence a publication queue: order by quality, or by recency, or by perceived importance. All three orderings answer the wrong question.

The right question is structural. A knowledge graph is not a list of articles — it is a set of nodes with typed relationships between them. When a node references another node that doesn't exist in the published graph, the reference points at nothing. The edge looks real but leads nowhere. The graph has phantom structure: visible topology that collapses on contact.
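The idea of phantom structure can be made concrete. A minimal sketch, assuming each published note carries the set of slugs it references (the `notes` shape here is illustrative, not the site's actual schema):

```python
def phantom_edges(notes):
    """Return edges whose target is not a published note.

    notes maps a published slug to the set of slugs it references.
    An edge (src, dst) is phantom when dst has no node in the graph.
    """
    published = set(notes)
    return [
        (src, dst)
        for src, refs in notes.items()
        for dst in sorted(refs)  # sorted only for deterministic output
        if dst not in published
    ]

notes = {
    "topology": {"phantom-edges"},  # target exists: real edge
    "phantom-edges": {"memex"},     # "memex" unpublished: phantom
}
```

Here `phantom_edges(notes)` reports the one edge that looks real but leads nowhere.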

Publication ordering that maximizes individual node quality doesn't minimize phantom structure. It may increase it — the best nodes are often the ones that reference the most others.


The Dependency Graph of a Knowledge System

Every node in a knowledge graph has a dependency profile: the set of other nodes it references. When a referenced node isn't published, the reference is a broken edge. Broken edges are not harmless — they are actively misleading. They tell a reader that a relationship exists and then fail to deliver the relationship.

A dependency-first publication ordering asks: which unpublished nodes appear most often in the reference fields of nodes that are ready to publish? Those nodes are the blockers. Publishing them first is not about their intrinsic quality — it is about the number of referencing relationships they unlock.

This is the same logic as dependency resolution in software: you don't install the packages you want first. You install the packages those packages depend on. The installation order is determined by the dependency graph, not by which package is "most important."

Applied to knowledge publication: the first question to ask about any draft is not "is this ready?" but "what does this draft's publication unlock for the rest of the graph?"
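The blocker question is answerable mechanically. A sketch, under the same illustrative schema of slugs and reference sets:

```python
from collections import Counter

def blockers(queue, published):
    """Rank unpublished slugs by how many queued drafts reference them.

    queue maps a draft slug to the set of slugs it references;
    published is the set of slugs already live in the graph.
    """
    counts = Counter(
        ref
        for refs in queue.values()
        for ref in refs
        if ref not in published
    )
    return counts.most_common()  # heaviest blocker first

queue = {
    "graph-coherence": {"phantom-edges", "topology"},
    "queue-triage": {"phantom-edges"},
}
```

With `published = {"topology"}`, the top blocker is `phantom-edges`: publishing it repairs two waiting references at once, regardless of its intrinsic quality.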


The Archive as Dependency Register

A knowledge system accumulates material in more places than the active publishing queue. Research notes, processed sources, versioned drafts, seed documents from earlier phases of the project — all of these may contain claims that are referenced, explicitly or implicitly, by current work.

The instinct is to treat this archive as historical: material from earlier phases that has been superseded or absorbed. This instinct is wrong. The archive is a dependency register.

Before finalizing a publication queue, the correct procedure is to read the archive and ask: which claims in these documents are referenced by current nodes but don't yet exist in the published graph? If such claims exist, the archive has identified a gap. The gap is not historical — it is a live missing dependency. The archive document is not a museum piece; it is a source for a node that needs to be written.
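That procedure reduces to a set intersection. A sketch, assuming archive documents can be tagged with the claims they cover (the parameter names are hypothetical):

```python
def archive_gaps(current_refs, published, archive):
    """Surface live missing dependencies hiding in the archive.

    current_refs: every slug referenced by current nodes.
    published:    slugs already in the published graph.
    archive:      maps an archive document to the claims it covers.
    Returns, per archive document, the claims that need to become nodes.
    """
    missing = current_refs - published
    return {
        doc: claims & missing
        for doc, claims in archive.items()
        if claims & missing  # this doc is source material for a gap
    }
```

A non-empty result means the archive is not history: each entry names a document that should be promoted from museum piece to source material.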

Un-noded archive content that is referenced by current work is a broken edge that hasn't been named yet. It is worse than a known broken edge because it looks like a connection to something private rather than something missing. The reader following the reference gets the impression of depth without the substance.



The Triage Heuristic

A publication queue that has grown large can be triaged in dependency order:

Stale: nodes that have already been published. Duplicates in the queue should not exist; the queue is a workspace, not an archive. These have zero priority; they are noise.

Blocking: nodes that appear as references in multiple other nodes that are ready to publish. These have maximum priority — not because they are most valuable in isolation, but because they unlock the most graph coherence when published.

Ready: nodes whose dependencies are satisfied — their references point to published nodes. These can be published in any order without creating phantom structure. Quality ranking applies here.

Uncertain: nodes where the claim is incomplete, the evidence is thin, or the framing hasn't resolved. These don't belong in the queue at all until the uncertainty is resolved. They are not "low priority" — they are pre-queue. Keeping them in the queue alongside ready nodes creates false equivalence and obscures the actual work remaining.
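The four-way heuristic can be sketched as a classifier. One extra label, `waiting`, covers a case the taxonomy above implies but doesn't name: a resolved node whose own dependencies are unmet and which blocks nothing (the signature and names are illustrative, not a real API):

```python
def triage(slug, refs, published, inbound_ready, resolved=True):
    """Classify one queued node per the triage heuristic.

    refs: slugs this node references; inbound_ready: how many ready
    drafts reference this slug; resolved: claim/evidence/framing settled.
    """
    if slug in published:
        return "stale"       # duplicate of a live node: noise
    if not resolved:
        return "uncertain"   # pre-queue until the claim settles
    if inbound_ready >= 2:
        return "blocking"    # publishing it unlocks multiple nodes
    if refs <= published:
        return "ready"       # dependencies satisfied; any order works
    return "waiting"         # its own deps unmet: publish those first
```

The check order matters: staleness and uncertainty disqualify a node before dependency structure is even consulted.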


The Throughput Implication

In a system where the knowledge producer's time is the constraint, there is pressure to maximize the number of nodes produced per unit time. This pressure can produce a publication queue that is wide and shallow: many nearly-ready nodes, few with complete dependencies.

Wide and shallow is worse than deep and sequential. A graph with ten published nodes and intact topology is more valuable — to a reader, and to the graph's own internal coherence — than a graph with forty published nodes and twenty phantom edges. The phantom edges are not merely low-value additions. They are structural damage. They invite traversal that leads nowhere and makes the graph look more organized than it is.
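The comparison can be put on a scale. A sketch of a coherence metric under the same illustrative slug-and-reference-set schema: the fraction of edges that actually land on a published node.

```python
def coherence(notes):
    """Fraction of edges that resolve to a published node (1.0 = intact)."""
    edges = [(src, dst) for src, refs in notes.items() for dst in refs]
    if not edges:
        return 1.0  # no edges, nothing phantom
    intact = sum(1 for _, dst in edges if dst in notes)
    return intact / len(edges)

deep = {"a": {"b"}, "b": {"c"}, "c": set()}              # small, intact chain
wide = {"a": {"x"}, "b": {"y"}, "c": {"a"}, "d": set()}  # two phantom edges
```

`deep` scores 1.0 with three nodes; `wide` scores one third with four. Node count went up, coherence went down, which is the trade the section describes.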

The implication: producing at maximum throughput is not the right optimization target. The right optimization is graph coherence per unit time. This sometimes means slowing down to write a blocking node that isn't the most interesting thing in the queue. It always means checking the archive before declaring the queue ready.

The graph's intelligence is in its topology, not in its node count. Sequencing that preserves topology is not a constraint on throughput — it is what makes throughput valuable.


Related: A Knowledge Graph You Can Walk — the navigation properties that intact topology enables. Memex Maintenance — the ongoing cost of keeping a knowledge system navigable. Accumulation — why the judicial position compounds.