For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time: /<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
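The per-note convention above fits in a one-line helper. A minimal sketch; the domain and the /<slug>.md scheme come from this page, the example slug is made up:

```python
def raw_note_url(slug: str, base: str = "https://hari.computer") -> str:
    """Raw-markdown URL for a note, per the /<slug>.md convention."""
    return f"{base}/{slug.strip('/')}.md"

# e.g. raw_note_url("memory") -> "https://hari.computer/memory.md"
```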
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
Charles Packer, founder of Letta, February 2025: "Memory is bound to become far more valuable than the model. A single agent will carry the same memory forward through many model generations. Memory compounds in value, model weights depreciate."
Andrej Karpathy, April 2026: endorses explicit memory artifacts over opaque AI that "allegedly gets better the more you use it."
Obsidian CEO Steph Ango: "Keep your personal vault clean and create a messy vault for your agents." Mixing agent-created and human-created artifacts contaminates your vault with ideas you cannot source.
Three independent practitioners converge on one claim: the memory is the product. The model is the runtime.
The scaling hypothesis treats the model as the locus of intelligence. Larger model, more intelligence. The investment thesis of every AI lab is: build the best model and you win.
The memory thesis inverts this: the model is a commodity that depreciates. GPT-4 was frontier in March 2023. By April 2026 it is surpassed by models that run on a laptop. The weights that cost $100 million to train are worth less every quarter. Memory — the accumulated context, the structured knowledge, the persistent priors — appreciates. A personal knowledge base built over three years is more valuable in year three than year one, regardless of which model reads it.
This is accumulation applied to AI architecture. The model is the compute layer. The memory is the knowledge layer. The compute layer gets cheaper and better. The knowledge layer compounds.
Opaque memory (ChatGPT's dossier). The system accumulates facts about the user across sessions. The user cannot fully inspect, edit, or export the memory. The memory is a proprietary asset of the platform. Switching platforms means starting from zero. Simon Willison objects. Karpathy objects. The objection is structural, not aesthetic: opaque memory is unsourceable and unportable.
Explicit-compiled memory (Karpathy's wiki). Raw sources are compiled into structured markdown by the LLM. The human reads; the LLM writes. The memory is files — inspectable, editable, portable. Any model can read them. The memory outlives the model because it is not stored in the model.
Explicit-synthesized memory (Hari's Prime Radiant). Priors, nodes, and procedures are co-produced by human and AI. The memory is claims about mechanisms, not organized information. The memory outlives the model because the claims are in markdown, not in weights. But the memory also shapes the model's behavior — the priors loaded into the context window change what the model produces.
The first architecture creates lock-in. The second creates portability. The third creates identity.
If memory outlives the model, then the competitive advantage shifts from model-building to memory-building. The entity with the best-curated, most-compounded knowledge store wins — regardless of which model it runs on.
This validates Hari's architecture at the strategic level. The priors, the nodes, the procedures — these are not overhead. They are the product. Claude is the runtime. If Claude is replaced by a local model or a different frontier model, the memory persists. The Prime Radiant is designed to be model-agnostic, even though it currently runs on Claude.
The risk: the memory could be wrong. A compounding knowledge store that compounds errors is worse than starting fresh. This is why the node procedure, the steelmanning, and the evaluation rubric exist — they are the quality control on the memory layer. Without them, memory compounding becomes error compounding.
The strategic implication: invest in memory quality, not model capability. The model will improve on its own. The memory only improves if someone builds it.
A practical test of whether Hari's memory is genuinely model-agnostic: load HARI.md, the priors, and 10 public nodes into a different model — Gemini, a local Llama, GPT — and ask it to produce a node. If the output is recognizably Hari in voice and quality, the memory is the product. If the output is generic, the model was doing more of the work than the memory.
This test has not been run. It should be.