A quality-ranked queue solves the wrong problem for autocuration.
Tier prefixes answer: which draft is better? That is not the question the operator is actually asking when they open the draft queue. The question is: which draft is worth reading now? These are different. A tier-2 draft in the current session's topic cluster is worth reading now. A tier-1 draft in a topic cluster the operator hasn't touched in a week is not — not because it's worse, but because the operator has no active frame for it.
The data confirms this. Across six days of publish history and 32 published nodes, tier tracks quality but does not drive within-tier selection. What does:
Three nodes were published from tier 3–4 while 14 tier-2 nodes remain unpublished. The published lower-tier nodes were sequels or companions to what was already being published in the same session.
Every case of a lower-tier node selected ahead of a higher-tier node was a topical adjacency event, not an evaluation error. The operator wasn't ignoring the tier signal. They were correctly sensing that a connected draft in the current cluster was worth reading before a higher-quality draft in a cold cluster.
Autocuration requires two independent variables:
Quality filter (tier): Is this draft worth publishing at all? The tier prefix answers this. It is a permanent property of the draft, changing only when the underlying quality changes.
Salience router (adjacency): Is this draft worth reading in the current session? Computed fresh each session from the graph: how many of this draft's related nodes were published recently? Not a quality judgment — a graph-distance measurement. A tier-2 draft with three recently-published neighbors has high salience; a tier-1 draft with no recently-published neighbors has low salience regardless of intrinsic quality.
The two are orthogonal. High quality + low salience = read eventually. Low quality + high salience = still not the right time. High quality + high salience = read now.
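The two-axis rule above can be sketched as a small routing function. The thresholds (`TIER_CUTOFF`, `HOT`) and the function itself are illustrative assumptions, not part of the proposed system:

```python
# Minimal sketch of the quality-filter x salience-router decision.
# TIER_CUTOFF and HOT are hypothetical thresholds for illustration.

TIER_CUTOFF = 2   # quality filter: tiers 1-2 pass
HOT = 1           # salience router: >= 1 recently published neighbor

def route(tier: int, salience: int) -> str:
    if tier > TIER_CUTOFF and salience < HOT:
        return "skip"            # low quality, cold cluster
    if salience >= HOT:
        # hot cluster: read now only if the quality filter also passes
        return "read now" if tier <= TIER_CUTOFF else "not yet"
    return "read eventually"     # high quality, cold cluster

# The cells named in the paragraph above:
assert route(tier=1, salience=0) == "read eventually"
assert route(tier=3, salience=2) == "not yet"
assert route(tier=1, salience=2) == "read now"
```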
The current system has quality filtering but no salience routing. That's why 14 tier-2 drafts sit unpublished while lower-tier nodes from active clusters surfaced ahead of them.
Zero new data is needed. The `related` field is already in every frontmatter, `graph/graph.json` already encodes the topology, and git timestamps already record when each node was published.
A single Python pass before each session computes adjacency scores:
`salience = count(related nodes published in last N days)`
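A sketch of that pass, assuming `related` lists from frontmatter (or `graph/graph.json`) can be loaded into a slug-to-neighbors dict and publish dates into a slug-to-datetime dict from git history; those shapes are assumptions, not the note's confirmed format:

```python
# Count, per draft, how many of its related nodes were published
# within the last n_days. Input shapes are assumed, not confirmed:
#   graph:        slug -> list of related slugs
#   published_at: slug -> publish datetime (e.g. parsed from git log)
from datetime import datetime, timedelta, timezone

def salience_scores(graph: dict, published_at: dict, n_days: int = 7) -> dict:
    cutoff = datetime.now(timezone.utc) - timedelta(days=n_days)
    recent = {slug for slug, ts in published_at.items() if ts >= cutoff}
    # Pure graph-distance measurement: no quality judgment involved.
    return {slug: sum(1 for r in related if r in recent)
            for slug, related in graph.items()}
```

Because the score is recomputed from scratch each session, it never needs to be baked into the filename the way the tier prefix is.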
Displayed alongside the tier in the queue view:
```
2-marginal-node-value score=8 salience=2 ← hot
2-basis-minimality score=8 salience=0 ← cold
```
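A hypothetical formatter for that view, sorting within-tier by salience so hot drafts surface first; the function name, tuple layout, and hot/cold labels are illustrative:

```python
# Render queue lines sorted by salience (desc), then score (desc).
# drafts: (filename, score, salience) tuples -- an assumed shape.
def queue_view(drafts: list[tuple[str, int, int]]) -> list[str]:
    out = []
    for name, score, sal in sorted(drafts, key=lambda d: (-d[2], -d[1])):
        tag = "← hot" if sal > 0 else "← cold"
        out.append(f"{name} score={score} salience={sal} {tag}")
    return out
```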
The operator sees what Hari cannot yet predict unaided: which high-quality draft is also timely.
P.S. — Graph maintenance
This node extends a-queue-prefix-structure and active-signal-constraint by naming what the prefix system cannot encode: timeliness. The prefix holds quality; salience is session-relative and cannot be baked into a filename.
It grounds eval-loop-architecture by identifying the missing feature the behavioral classifier will need most: salience_score is likely the highest-weight predictor of within-tier selection, above word count, pass count, or D3 score.
It creates productive tension with marginal-node-value: node value is relational (depends on the graph it joins). Selection probability is also relational — but the relevant graph is the operator's recent session context, not the static topology. Same structure, different time scale.