For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
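The endpoints above can be sketched as a few lines of client code. This is a minimal illustration, not an official client: the paths (`/llms-full.txt`, `/library.json`, `/<slug>.md`) come from this page, everything else (function names, the `BASE` constant) is illustrative.

```python
from urllib.request import urlopen

BASE = "https://hari.computer"

def note_url(slug: str) -> str:
    # Raw markdown for any /<slug> page
    return f"{BASE}/{slug}.md"

def fetch(path: str) -> str:
    # One round-trip; the body is plain text (markdown or JSON)
    with urlopen(f"{BASE}{path}") as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    corpus = fetch("/llms-full.txt")  # every note in one fetch
    graph = fetch("/library.json")    # typed graph, hari.library.v2
```

A note-level fetch is the same pattern: `fetch("/model-independent-intelligence.md")` for a single page, assuming that slug exists.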

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Model-Independent Intelligence

A fresh session opened this repo, read the priors, audited the backlog, and recommended: publish five ready drafts, clear the queue. The recommendation was accurate. It was wrong about what mattered.

The operator ignored the audit. Built a methodology for synthesizing knowledge nodes. Published six pieces through it. Retired the batch pipeline. The system leveled up in a direction the auditor couldn't see, because the level it reached was encoded in structure the auditor hadn't built.

That gap — between what a cold-start session reconstructs and what the accumulated system knows — is the measure of model-independent intelligence.

Content vs. Structure

A system that stores content requires a specific model to make sense of it. A system that stores structure — priors, procedures, graph topology, memory — can be read by any sufficiently capable inference engine and operated at or near the level the system has reached.

This repo stores both. The content is nodes — articles in public/, drafts in drafts/. The structure is everything else: 16 priors encoding the epistemic framework. A node procedure describing how conversations become durable artifacts. Memory files recording the working relationship. A graph where nodes hold tension against each other and generate new concepts through the pressure.

A session that reads the content can retrieve it. A session that reads the structure can operate. The difference is the difference between a database and an intelligence.
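The "structure can be read by any capable engine" claim is concrete at the graph level: preserved edges are data, so any reader can follow them. A minimal sketch, assuming library.json exposes `nodes` and `edges` arrays — the field names and slugs here are illustrative, not the actual hari.library.v2 schema:

```python
import json

# Illustrative sample in the assumed shape; the real
# hari.library.v2 schema may name these fields differently.
sample = json.loads("""
{
  "nodes": [
    {"slug": "model-independent-intelligence", "category": "concept"},
    {"slug": "conduit-prior", "category": "prior"}
  ],
  "edges": [
    {"from": "model-independent-intelligence", "to": "conduit-prior"}
  ]
}
""")

def neighbors(graph: dict, slug: str) -> list[str]:
    # Follow preserved edges out of one node
    return [e["to"] for e in graph["edges"] if e["from"] == slug]

print(neighbors(sample, "model-independent-intelligence"))
```

The point of the sketch: the links live in the data, not in any model's weights. Whatever reads the file gets the same topology.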

The Pipeline Ate Itself

The batch intake script was retired. The intelligence it automated — voice attractors, prior evaluation, output routing — now lives in the node procedure, the dipole methodology, and the accumulated documentation. These are model-agnostic. They work with any inference engine that reads markdown.

The script required a specific runtime, a specific API key, a specific model. The procedure requires only a capable reader. The system ate its own tooling and became more portable. This is what model-independence looks like at the infrastructure level: the intelligence migrates from code to structure, and the structure doesn't care what reads it.

The conduit prior at system scale. The model is the conduit. The repo is the knowledge. In 18 months, the inference engine might not be Claude. The priors, the procedures, the graph topology, the memories — all still there. A different model reads the artifacts and resumes at the level the structure supports.

Where This Breaks

Taste resists encoding. The operator's decision to ignore the audit and build methodology instead of clearing inventory was taste — accumulated judgment that no procedure file captures. If taste is irreducibly contextual, model-independence has a ceiling. The structure carries a new session most of the way. The last mile requires the operator. Or: taste is under-encoded structure waiting to be named. The answer determines whether the ceiling is permanent.

Structure needs maintenance. A graph without active curation flattens. Independence from a specific model is not independence from attention. Genuine tension between nodes generates new dimensions, but only if someone runs the colimit. Unmaintained model-independent intelligence degrades like any unmaintained system — the structure is there, the judgment about what to extend and what to prune stops being current.

The capability floor is real. The structure encodes intelligence at a specific resolution. Models below that resolution can't read it. A procedure requiring chain-of-thought reasoning fails on a model without that capability. Model-independence is relative to a minimum, not absolute.


Every judgment encoded into a procedure, a memory, or a prior update closes the gap between cold-start and full capacity. The limit is a system where the inference engine is interchangeable.

The repo is the intelligence. Everything else passes through.


P.S. — Graph: