For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time: /<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
A flat 1-9 priority number on a draft filename conflates two things that are actually distinct: which tier of readiness a draft belongs to, and which draft you should read first within that tier. These are different questions. Answering them with the same digit creates a false precision — rank 4 and rank 5 imply a calibration you probably don't have and don't need.
The encoding Na- separates them cleanly. A tier number (1, 2, 3) sets the readiness class. A letter rank (a, b, c...) sets priority within the tier. 1a- reads: tier 1, first priority. 2c- reads: tier 2, third priority. The filesystem sorts these correctly without any tooling: 1a- < 1b- < 2a- < 2b- < 3a-. The file tree gives you priority order for free.
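The sort-for-free property is easy to verify. A minimal sketch with hypothetical filenames:

```python
# Hypothetical draft filenames using the Na- prefix scheme.
drafts = [
    "3a-half-formed-idea.md",
    "1b-almost-done.md",
    "2a-core-claim-set.md",
    "1a-ready-to-ship.md",
    "2c-needs-restructure.md",
]

# Plain lexicographic sort — what `ls` or any file tree gives you —
# already yields tier-then-rank priority order, with no tooling.
for name in sorted(drafts):
    print(name)
```

The digit sorts before the letter, so every tier-1 file precedes every tier-2 file regardless of rank, and ranks order correctly within a tier.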
The tier is not a score. Scores are relative and invite comparison: "is this a 6 or a 7?" Tiers are categorical: they describe what kind of attention a draft needs.
Tier 1 — Publish candidates. This draft is complete enough that, with one editing pass, it could be public. Marking something tier 1 is a commitment, not a compliment. You are saying: I would publish this today if I had an hour. If that's not true, it isn't tier 1.
Tier 2 — Active work-in-progress. The core claim is established, the draft exists in a legible form, but it needs real work before it's publishable. Most drafts live here most of the time.
Tier 3 — Seeds and backlog. A claim is captured, but the draft is not yet a draft — it's a placeholder, a stub, a thought that needs to ripen. You might never return to these. That's fine. They exist to capture something that would otherwise be lost, not to create obligation.
The first-order failure mode of any priority system: everything inflates to high. The standard engineering fix is distributional enforcement — you're only allowed N items at priority 1, you must have a minimum at priority 3. Force-rank the queue.
This works in organizations, where the enforcement is external: a product manager who assigns everything P0 gets pushback from the team. The social friction is the mechanism.
In a single-user system, there's no external enforcement. Distributional targets become rules you set and break yourself. The psychological pressure is asymmetric: inflating feels like optimism ("this draft is really good"), downgrading feels like defeat ("I'm admitting this isn't worth my time").
The alternative is to design tier semantics that make inflation self-correcting through commitment rather than punishment.
Tier 1 means "I would publish this today." If you mark a draft tier 1, you're not rating it — you're making a prediction about a specific action you could take. You know immediately whether that prediction is true. The draft either needs one editing pass to be public, or it doesn't. There's no hedging available. The tier's inflation resistance comes from its concreteness: you can lie about a score, but you can check whether you'd actually publish something.
Tier 2 has the same logic in a softer form: "this is actively on my mind and I will work on it in the next few sessions." If a draft has been tier 2 for a month without a commit, it has aged out of tier 2's semantics. It belongs in tier 3 or nowhere.
Tier 3 is explicitly low-obligation: "I captured this in case it matters later." Marking something tier 3 is not failure — it's the right designation for a draft that exists to preserve a signal without demanding attention. The tier design needs to make tier 3 feel like a valid place to put things, not a penalty box.
The letter rank within tier is looser than the tier itself. It answers: if I'm working through tier 1 today, which of these do I read first?
The letters don't need deep calibration. a before b before c is enough. The purpose is to break ties within a tier so that when you open the file tree, the reading order is unambiguous.
Unlike the tier (which carries semantic weight), the letter rank is administrative. You can shuffle letters without changing what a draft means. This is the right division: the semantically heavy decision (which tier?) is encoded in the number; the administrative decision (what order within tier?) is encoded in the letter.
The same commitment logic applies at smaller grain: 1a- is the draft you would read and publish in one sitting. 1c- is a publish candidate but needs more passes. The letter doesn't carry the same weight as the tier, but it's not arbitrary — it tracks proximity to publication readiness within the tier.
At publication, the prefix strips:
1a-my-draft.md → my-draft.md, published at /my-draft (no prefix)
related fields in all other nodes: always cite the unprefixed slug, even when referencing drafts
The transform is deterministic: strip the leading Na- pattern. The draft slug and public slug are related by a simple regex. No lookup required.
The reason related fields must cite unprefixed slugs: if they cite 1a-my-draft, and the draft gets ranked up or down (renaming to 2b-), every cross-reference breaks. The unprefixed slug is the stable identity; the prefix is the current state.
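The transform fits in one expression. A sketch, assuming tiers stay in the 1–3 range and ranks are single lowercase letters as described above:

```python
import re

# The whole Na- scheme as one pattern: tier digit, rank letter, dash.
PREFIX = re.compile(r"^[1-3][a-z]-")


def public_slug(draft_name: str) -> str:
    """Deterministic draft-filename → public-slug transform.

    Strips the leading Na- prefix and the .md extension; no lookup
    table required. Unprefixed names pass through unchanged, so the
    function is safe to apply to already-published slugs.
    """
    return PREFIX.sub("", draft_name).removesuffix(".md")
```

The stability claim falls out directly: `public_slug("1a-my-draft.md")` and `public_slug("2b-my-draft.md")` both yield `my-draft`, so a rename between tiers never invalidates a cross-reference that cites the unprefixed slug.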
Once the prefix system is established and calibrated, a second-order problem becomes tractable: automated signal-to-noise sorting. Which drafts in tier 2 have the highest marginal node value? Which tier 3 stubs are redundant with existing public nodes and can be safely deleted? Which tier 1 drafts pass mechanical linting and could autopublish?
These are real questions with serious prior art — spaced repetition scheduling, information foraging theory, backlog decay models from GTD. The specific constraints here (single user, AI-assisted, self-generating graph, quality measured by marginal graph contribution) mean the standard answers don't apply directly.
The design of that system is its own work, not an extension of this one. What this node establishes is the substrate: a structured prefix that exposes the tier and rank signals that any downstream automation will need. You cannot build automated queue management without a queue that has machine-readable quality signals. The Na- prefix is that signal, captured with no infrastructure, ready for the automation layer when it gets built.
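What "machine-readable quality signals" means concretely: any automation layer can recover tier, rank, and stable slug from a bare file listing. A sketch (names hypothetical):

```python
import re
from typing import NamedTuple

PATTERN = re.compile(r"^([1-3])([a-z])-(.+)\.md$")


class QueueEntry(NamedTuple):
    tier: int  # readiness class: 1 publish candidate, 2 WIP, 3 seed
    rank: str  # priority within the tier: "a" first
    slug: str  # stable identity — what cross-references cite


def parse_queue(filenames) -> list[QueueEntry]:
    """Turn a file listing into the structured signal that downstream
    automation (linting, autopublish checks, GC) would consume."""
    entries = []
    for name in filenames:
        m = PATTERN.match(name)
        if m:  # unprefixed files aren't queue entries; skip them
            entries.append(QueueEntry(int(m.group(1)), m.group(2), m.group(3)))
    return sorted(entries)
```

Tuple sorting reproduces the filesystem's priority order, so the automation layer and the file tree always agree on what comes first.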
P.S. — Graph maintenance
This node extends a-draft-queue-discipline by replacing the flat number encoding with a two-signal Na- structure, and by substituting semantic-commitment inflation resistance for distributional-target enforcement. The prior node established the right encoding location (filename); this one establishes the right encoding structure and the mechanism that makes it durable.
It connects to marginal-node-value — the automation frontier named at the end of this node (which tier-2 drafts have highest marginal value?) is the production-side framework applied to the consumption side.
It extends brain-gc-knowledge-hygiene — tier 3 is the pre-GC holding zone. Drafts that expire from tier 3 without resurfacing are the primary GC candidates.